Social media firms should face fines for hate speech failures, urge UK MPs

Social media giants Facebook, YouTube and Twitter have once again been accused of taking a “laissez-faire approach” to moderating hate speech content on their platforms.

This follows an escalation of political rhetoric against social platforms in the UK in recent months, notably after a terror attack in London in March, when Home Secretary Amber Rudd called for tech firms to do more to help block the spread of terrorist content online.

In a highly critical report looking at the spread of hate, abuse and extremism on Facebook, YouTube and Twitter, a UK parliamentary committee has suggested the government look at imposing fines on social media firms for content moderation failures.

It’s also calling for a review of existing legislation to ensure clarity about how the law applies in this area.

“Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe,” the committee writes in the report.

Last month, the German government backed a draft law which includes proposals to fine social media firms up to €50 million if they fail to remove illegal hate speech within 24 hours after a complaint is made.

A European Union-wide Code of Conduct on swiftly removing hate speech, which was agreed between the Commission and social media giants a year ago, does not include any financial penalties for failure — but there are signs some European governments are becoming convinced of the need to legislate to force social media companies to improve their content moderation practices.

The UK Home Affairs committee report describes it as “shockingly easy” to find examples of material intended to stir up hatred against ethnic minorities on all three of the social media platforms it looked at for the report.

It urges social media companies to introduce “clear and well-funded arrangements for proactively identifying and removing illegal content — particularly dangerous terrorist content or material related to online child abuse”, calling for similar co-operation and investment to combat extremist content as the tech giants have already put into collaborating to tackle the spread of child abuse imagery online.

The committee’s investigation, which started in July last year following the murder of a UK MP by a far-right extremist, was intended to be more wide-ranging. However, because the work was cut short by the UK government calling an early general election, the committee says it has published specific findings on how social media companies are addressing hate crime and illegal content online — having taken evidence for this from Facebook, Google and Twitter.

“It is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe,” it writes.

The committee flags multiple examples where it says extremist content was reported to the tech giants but these reports were not acted on adequately — calling out Google, especially, for “weakness and delays” in response to reports it made of illegal neo-Nazi propaganda on YouTube.

It also notes the three companies refused to tell it exactly how many people they employ to moderate content, and exactly how much they spend on content moderation.

The report makes especially uncomfortable reading for Google with the committee directly accusing it of profiting from hatred — arguing it has allowed YouTube to be “a platform from which extremists have generated revenue”, and pointing to the recent spate of advertisers pulling their marketing content from the platform after it was shown being displayed alongside extremist videos. Google responded to the high-profile backlash from advertisers by pulling ads from certain types of content.

“Social media companies rely on their users to report extremist and hateful content for review by moderators. They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense. We believe that it is unacceptable that social media companies are not taking greater responsibility for identifying illegal content themselves,” the committee writes.

“If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar content to stop extremists re-posting or sharing illegal material under a different name. We believe that the government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.”
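The committee is alluding to the fingerprint-matching systems platforms already operate for copyright enforcement (YouTube's Content ID being the best-known example). As a minimal, purely illustrative sketch of the idea, and not any platform's actual system, re-upload blocking reduces to checking each new upload against a database of fingerprints of previously removed material. All names below are hypothetical, and real systems use perceptual hashes that survive re-encoding rather than the exact byte hash used here:

    import hashlib

    # Hypothetical blocklist of fingerprints for material already judged illegal.
    # Production systems use perceptual hashes (robust to re-encoding), not raw SHA-256.
    KNOWN_ILLEGAL_FINGERPRINTS = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def fingerprint(data: bytes) -> str:
        """Identify this exact byte sequence, whatever name it is uploaded under."""
        return hashlib.sha256(data).hexdigest()

    def should_block(upload: bytes) -> bool:
        """Reject an upload that matches previously removed material."""
        return fingerprint(upload) in KNOWN_ILLEGAL_FINGERPRINTS

The gap between this sketch and a workable system is where the real engineering effort lies: an exact hash is defeated by trivial re-encoding, which is why the copyright systems the committee points to rely on perceptual fingerprinting of audio and video instead.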

The committee suggests social media firms should have to contribute to the cost to the taxpayer of policing their platforms — pointing to how football teams are required to pay for policing in their stadiums and the immediate surrounding areas under UK law as an equivalent model.

It is also calling for social media firms to publish quarterly reports on their safeguarding efforts, including —

  • analysis of the number of reports received on prohibited content
  • how the companies responded to reports
  • what action is being taken to eliminate such content in the future
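The committee does not prescribe a format for these reports. Purely as an illustration of how little structure the three requested metrics would demand, they might map onto something like the following (all field names are hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class QuarterlySafeguardingReport:
        """Hypothetical shape for the quarterly report the committee proposes."""
        platform: str
        quarter: str                 # e.g. "2017-Q2"
        reports_received: dict       # prohibited-content category -> count of user reports
        actions_taken: dict          # e.g. {"removed": 0, "escalated": 0, "no_action": 0}
        planned_measures: list = field(default_factory=list)  # steps to eliminate such content in future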

“It is in everyone’s interest, including the social media companies themselves, to find ways to reduce pernicious and illegal material,” the committee writes. “Transparent performance reports, published regularly, would be an effective method to drive up standards radically and we hope it would also encourage competition between platforms to find innovative solutions to these persistent problems. If they refuse to do so, we recommend that the government consult on requiring them to do so.”

The report, which is replete with pointed adjectives like “shocking”, “shameful”, “irresponsible” and “unacceptable”, follows several critical media reports in the UK which highlighted examples of moderation failures and showed extremist and paedophilic content continuing to spread on social media platforms.

Responding to the committee’s report, a YouTube spokesperson told us: “We take this issue very seriously. We’ve recently tightened our advertising policies and enforcement; made algorithmic updates; and are expanding our partnerships with specialist organisations working in this field. We’ll continue to work hard to tackle these challenging and complex problems”.

In a statement, Simon Milner, director of policy at Facebook, added: “Nothing is more important to us than people’s safety on Facebook. That is why we have quick and easy ways for people to report content, so that we can review, and if necessary remove, it from our platform. We agree with the Committee that there is more we can do to disrupt people wanting to spread hate and extremism online. That’s why we are working closely with partners, including experts at King’s College London, and at the Institute for Strategic Dialogue, to help us improve the effectiveness of our approach. We look forward to engaging with the new Government and parliament on these important issues after the election.”

Nick Pickles, Twitter’s UK head of public policy, provided this statement: “Our Rules clearly stipulate that we do not tolerate hateful conduct and abuse on Twitter. As well as taking action on accounts when they’re reported to us by users, we’ve significantly expanded the scale of our efforts across a number of key areas. From introducing a range of brand new tools to combat abuse, to expanding and retraining our support teams, we’re moving at pace and tracking our progress in real-time. We’re also investing heavily in our technology in order to remove accounts who deliberately misuse our platform for the sole purpose of abusing or harassing others. It’s important to note this is an ongoing process as we listen to the direct feedback of our users and move quickly in the pursuit of our mission to improve Twitter for everyone.”

The committee says it hopes the report will inform the early decisions of the next government — with the UK general election due to take place on June 8 — and feed into “immediate work” by the three social platforms to be more pro-active about tackling extremist content.

Commenting on the publication of the report yesterday, Home Secretary Amber Rudd told the BBC she expected to see “early and effective action” from the tech giants.
