The still fresh-in-post boss of Instagram, Adam Mosseri, has been asked to meet the UK’s health secretary, Matt Hancock, to discuss the social media platform’s handling of content that promotes suicide and self-harm, the BBC reports.
Mosseri’s summons follows a public outcry in the UK over disturbing content being recommended to vulnerable users of the platform, after 14-year-old schoolgirl Molly Russell killed herself in 2017.
After her death, Molly’s family discovered she had been following a number of Instagram accounts that encouraged self-harm. Speaking to the BBC last month, Molly’s father said he did not doubt the platform had played a role in her decision to kill herself.
Writing in the Telegraph newspaper today, Mosseri makes direct reference to Molly’s tragedy, saying he has been “deeply moved” by her story and those of other families affected by self-harm and suicide, before going on to admit that Instagram is “not yet where we need to be on the issues”.
“We rely heavily on our community to report this content, and remove it as soon as it’s found,” he writes, conceding that the platform has offloaded the lion’s share of responsibility for content policing onto users thus far. “The bottom line is we do not yet find enough of these images before they’re seen by other people,” he admits.
Mosseri then uses the article to announce a couple of policy changes in response to the public outcry over suicide content.
Starting this week, he says, Instagram will add “sensitivity screens” to all reviewed content that “contains cutting”. “These images will not be immediately visible, which will make it more difficult for people to see them,” he suggests.
Though that clearly won’t stop fresh uploads from being distributed unscreened. (Nor prevent young and vulnerable users clicking to view disturbing content regardless.)
Mosseri justifies Instagram’s decision not to blanket-delete all content related to self-harm and/or suicide by saying its policy is to “allow people to share that they are struggling even if that content no longer shows up in search, hashtags or account recommendations”.
“We’ve taken a hard look at our work and though we have been focused on the individual who is vulnerable to self harm, we need to do more to consider the effect of self-harm images on those who may be inclined to follow suit,” he continues. “This is a difficult but important balance to get right. These issues will take time, but it’s critical we take big steps forward now. To that end we have started to make changes.”
Another policy change he reveals is that Instagram will stop its algorithms actively recommending additional self-harm content to vulnerable users. “[F]or images that don’t promote self-harm, we let them stay on the platform, but moving forward we won’t recommend them in search, hashtags or the Explore tab,” he writes.
Unchecked recommendations have opened Instagram up to accusations that it essentially encourages depressed users to self-harm (or even suicide) by pushing more disturbing content into their feeds once they start to show an interest.
So putting limits on how algorithms distribute and amplify sensitive content is an obvious and overdue step — but one that has only come after significant public and political pressure on the Facebook-owned company.
Last year the UK government announced plans to legislate on social media and safety, though it has yet to publish details (a white paper setting out platforms’ responsibilities is expected in the next few months). And just last week a UK parliamentary committee urged the government to place a legal ‘duty of care’ on platforms to protect minors.
In a statement given to the BBC, the Department for Digital, Culture, Media and Sport confirmed such a legal duty remains on the table. “We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms, and are seriously considering all options,” it said.
There’s little doubt that the prospect of safety-related legislation incoming in a major market for the platform — combined with public attention on Molly’s tragedy — has propelled the issue to the top of the Instagram chief’s inbox.
Mosseri writes now that Instagram began “a comprehensive review last week” with a focus on “supporting young people”, adding that the revised approach entails reviewing content policies, investing in technology to “better identify sensitive images at scale” and applying measures to make such content “less discoverable”.
He also says it’s “working on more ways” to link vulnerable users to third party resources, connecting them with organisations it already works with on user support, such as Papyrus and Samaritans. But he concedes the platform needs to “do more to consider the effect of self-harm images on those who may be inclined to follow suit” — not just on the poster themselves.
“This week we are meeting experts and academics, including Samaritans, Papyrus and Save.org, to talk through how we answer these questions,” he adds. “We are committed to publicly sharing what we learn. We deeply want to get this right and we will do everything we can to make that happen.”
We’ve reached out to Facebook, Instagram’s parent, for further comment.
One way user-generated content platforms could support the goal of better understanding impacts of their own distribution and amplification algorithms is to provide high quality data to third party researchers so they can interrogate platform impacts.
That was another of the recommendations from the UK’s science and technology committee last week. But it’s not yet clear whether Mosseri’s commitment to sharing what Instagram learns from meetings with academics and experts will also result in data flowing the other way — i.e. with the proprietary platform sharing its secrets with experts so they can robustly and independently study social media’s antisocial impacts.
Recommendation algorithms lie at the center of many of social media’s perceived ills — and the problem scales far beyond any one platform. YouTube’s recommendation engines have, for example, also long been criticized for having a similarly ‘radicalizing’ impact — such as by pushing viewers of conservative content towards far more extreme, far right and/or conspiracy theorist views.
With the huge platform power of tech giants in the spotlight, it’s clear that calls for increased transparency will only grow — unless or until regulators make access to and oversight of platforms’ data and algorithms a legal requirement.