Instagram’s Adam Mosseri to meet UK health secretary over suicide content concerns

Adam Mosseri at TechCrunch Disrupt

The still fresh-in-post boss of Instagram, Adam Mosseri, has been asked to meet the UK’s health secretary, Matt Hancock, to discuss the social media platform’s handling of content that promotes suicide and self harm, the BBC reports.

Mosseri’s summons follows an outcry in the UK over disturbing content being recommended to vulnerable users of Instagram, after the 2017 suicide of 14-year-old schoolgirl Molly Russell.

After her death, Molly’s family discovered she had been following a number of Instagram accounts that encouraged self-harm. Speaking to the BBC last month, Molly’s father said he did not doubt the platform had played a role in her decision to kill herself.

Writing in the Telegraph newspaper today, Mosseri makes direct reference to Molly’s tragedy, saying he has been “deeply moved” by her story and those of other families affected by self-harm and suicide, before going on to admit that Instagram is “not yet where we need to be on the issues”.

“We rely heavily on our community to report this content, and remove it as soon as it’s found,” he writes, conceding that the platform has offloaded the lion’s share of responsibility for content policing onto users thus far. “The bottom line is we do not yet find enough of these images before they’re seen by other people,” he admits.

Mosseri then uses the article to announce a couple of policy changes in response to the public outcry over suicide content.

Beginning this week, he says, Instagram will add “sensitivity screens” to all content it reviews which “contains cutting”. “These images will not be immediately visible, which will make it more difficult for people to see them,” he suggests.

Though that clearly won’t stop fresh uploads from being distributed unscreened. (Nor prevent young and vulnerable users clicking to view disturbing content regardless.)

Mosseri justifies Instagram’s decision not to blanket-delete all content related to self-harm and/or suicide by saying its policy is to “allow people to share that they are struggling even if that content no longer shows up in search, hashtags or account recommendations”.

“We’ve taken a hard look at our work and though we have been focused on the individual who is vulnerable to self-harm, we need to do more to consider the effect of self-harm images on those who may be inclined to follow suit,” he continues. “This is a difficult but important balance to get right. These issues will take time, but it’s critical we take big steps forward now. To that end we have started to make changes.”

Another policy change he reveals is that Instagram will stop its algorithms actively recommending additional self-harm content to vulnerable users. “[F]or images that don’t promote self-harm, we let them stay on the platform, but moving forward we won’t recommend them in search, hashtags or the Explore tab,” he writes.

Unchecked recommendations have opened Instagram up to accusations that it essentially encourages depressed users to self-harm (or even suicide) by pushing more disturbing content into their feeds once they start to show an interest.

So putting limits on how algorithms distribute and amplify sensitive content is an obvious and overdue step — but one that’s taken significant public and political attention for the Facebook-owned company to make.

Last year the UK government announced plans to legislate on social media and safety, though it has yet to publish details of its plans (a white paper setting out platforms’ responsibilities is expected in the next few months). But just last week a UK parliamentary committee also urged the government to place a legal ‘duty of care’ on platforms to protect minors.

In a statement given to the BBC, the Department for Digital, Culture, Media and Sport confirmed such a legal duty remains on the table. “We have heard calls for an internet regulator and to place a statutory ‘duty of care’ on platforms, and are seriously considering all options,” it said.

There’s little doubt that the prospect of safety-related legislation incoming in a major market for the platform — combined with public attention on Molly’s tragedy — has propelled the issue to the top of the Instagram chief’s inbox.

Mosseri writes now that Instagram began “a comprehensive review last week” with a focus on “supporting young people”, adding that the revised approach entails reviewing content policies, investing in technology to “better identify sensitive images at scale” and applying measures to make such content “less discoverable”. 

He also says it’s “working on more ways” to link vulnerable users to third party resources, such as by connecting them with organisations it already works with on user support, including Papyrus and Samaritans. But he concedes the platform needs to “do more to consider the effect of self-harm images on those who may be inclined to follow suit” — not just on the poster themselves.

“This week we are meeting experts and academics, including Samaritans, Papyrus and Save.org, to talk through how we answer these questions,” he adds. “We are committed to publicly sharing what we learn. We deeply want to get this right and we will do everything we can to make that happen.”

We’ve reached out to Facebook, Instagram’s parent, for further comment.

One way user-generated content platforms could support the goal of better understanding the impacts of their own distribution and amplification algorithms is to provide high quality data to third party researchers so they can independently interrogate those impacts.

That was another of the recommendations from the UK’s science and technology committee last week. But it’s not yet clear whether Mosseri’s commitment to sharing what Instagram learns from meetings with academics and experts will also result in data flowing the other way — i.e. with the proprietary platform sharing its secrets with experts so they can robustly and independently study social media’s antisocial impacts.

Recommendation algorithms lie at the center of many of social media’s perceived ills — and the problem scales far beyond any one platform. YouTube’s recommendation engines have, for example, also long been criticized for having a similar ‘radicalizing’ impact — such as by pushing viewers of conservative content toward far more extreme/far right and/or conspiracy theorist views.

With the huge platform power of tech giants in the spotlight, it’s clear that calls for increased transparency will only grow — unless or until regulators make access to and oversight of platforms’ data and algorithms a legal requirement.
