Social media giants making progress on illegal hate speech takedowns: EC

It’s been a year since the four major social platform players agreed a voluntary Code of Conduct with Europe’s executive body, committing to remove illegal hate speech within 24 hours of a complaint being received.

A lot has happened on this front since then, with a series of content moderation scandals hitting different platforms and ramping up the regional pressure on the tech giants: YouTube has suffered an advertiser backlash over ads being served next to extremist content, while Facebook has been accused of a series of moderation failures, including around child abuse and terrorist content. Not to mention the fake news controversy.

In Germany, the government is now leaning towards legislating to levy fines of up to €50 million on social media platforms that fail to remove illegal hate speech promptly, arguing that tech giants have not been doing enough. (A UK parliamentary committee reached a similar conclusion last month, urging the government to consider introducing fines as well.)

But today the European Commission, at least, is trumpeting what it dubs “significant progress” on illegal hate speech takedowns by Facebook, Twitter, YouTube and Microsoft compared with their performance six months earlier, though it also cautions that some challenges remain.

Illegal hate speech is defined in EU law as the public incitement to violence or hatred on the basis of certain characteristics, including race, color, religion, descent and national or ethnic origin.

When the four tech firms receive a request to remove content from their online platforms, they assess it against their own rules and community guidelines and, where applicable in Europe, against national laws on combating racism and xenophobia. They are therefore making judgements on whether content constitutes illegal online hate speech; where it does, they have agreed to take it down, aiming to act within 24 hours of a report being received.

The EC argues that removing illegal hate speech is not censorship but rather helps defend the right to freedom of expression because threats can prevent people from feeling able to freely express their views.


A majority of illegal hate speech is now being removed

The evaluation of the voluntary Code of Conduct, a year in, found that the tech platforms responded to notifications of illegal hate speech by removing the content in 59 per cent of cases on average, more than double the 28 per cent removal rate recorded in the first evaluation of the code, six months ago.

It also found an improvement in the proportion of notifications reviewed within 24 hours, which rose from 40 per cent to a majority (51 per cent) over the same six-month period.

It notes, though, that Facebook is the only company that “fully achieves the target of reviewing the majority of notifications within the day”.

Another area for improvement the evaluation highlights is the discrepancy in outcomes depending on whether content is reported by an individual citizen or by an organization.

So while it notes some progress on this front, with tech platforms apparently improving how they handle citizen complaints, it also says “some differences persist”, and that overall removal rates remain lower when a notification originates from the public.

The evaluation also points to ongoing discrepancies between tech platforms in their feedback systems for users who report content — with only Facebook sending “systematic feedback” to inform a person how their notification has been assessed.

“Practices differed considerably among the IT companies. Quality of feedback motivating the decision is an area where further progress can be made,” it adds.

The EC is drawing on an evaluation carried out in 24 Member States by NGOs and public bodies for this assessment, whereas the German government has based its assessment of the social giants’ performance on hate speech removals on reports from the local youth protection organization jugendschutz.net. (In March, it cited that assessment when criticizing Facebook and Twitter in particular for not doing enough to promptly remove illegal hate speech, and introduced a draft law providing for fines of up to €50 million.)

In this, the second evaluation of the EU Code of Conduct, 2,575 notifications were submitted to the tech firms taking part in the code, roughly four times the number submitted in the first monitoring exercise, in December 2016. Facebook received the largest share of notifications (1,273 cases), followed by YouTube (658 cases) and Twitter (644 cases); Microsoft did not receive any.

Making some general observations, the evaluation said that within the last year the four platform giants have strengthened their reporting systems and made it easier to report hate speech.

They have also trained staff and — in the EC’s words — “increased their cooperation with civil society”.

The EC further suggests the Code of Conduct has helped tackle the spread of illegal hate speech in the region by strengthening and enlarging the tech firms’ network of “trusted flaggers” throughout Europe.

And it argues that via increased co-operation with civil society organizations the tech platforms have gained “a higher quality of notifications”, which in turn is yielding “more effective handling times and better results in terms of reactions to the notifications”.

Věra Jourová, the European Union commissioner for justice, consumers and gender equality, described the results of the one-year evaluation as “encouraging”.

“This is an important step in the right direction and shows that a self-regulatory approach can work, if all actors do their part,” she said in a statement.

“At the same time, companies carry a great responsibility and need to make further progress to deliver on all the commitments. For me, it is also important that the IT companies provide better feedback to those who notified cases of illegal hate speech content,” she added.

In another supporting statement, Andrus Ansip, the EC’s VP for the digital single market, added: “Working closely with the private sector and civil society to fight illegal hate speech brings results, and we will redouble our joint efforts.

“We are now working to ensure closer coordination between the different initiatives and forums that we have launched with online platforms. We will also bring more clarity to notice and action procedures to remove illegal content in an efficient way — while preserving freedom of speech, which is essential.”

Last month Facebook announced it would add 3,000 staff to its team of content reviewers, bringing the total headcount to 7,500. The company has been dealing with a string of content moderation scandals, not just in Europe, such as its Facebook Live feature being used to broadcast murder and suicide.

Commenting in a statement on the Code of Conduct evaluation today, Richard Allan, VP public policy EMEA for Facebook, said: “We believe that the best solutions to the challenge of hate speech on the Internet are found when governments, civil society and industry work together.

“The results of the independent tests released by the European Commission today show that our partnership is having a significant positive impact for people in the EU. We have made many improvements to our policies and processes over the last year and now see that more illegal hate speech is being removed more quickly than ever before.

“We are determined to keep doing better and live up to the high standards that people rightly expect of us. We recently announced that we would be adding another 3,000 staff to our global team of reviewers. We are also looking at how we can use the latest technology to help our review teams identify and prioritise high risk content.”

In a statement, Karen White, Twitter’s head of public policy in Europe, added: “At Twitter, we strive to reach the right balance between showing all sides of what’s happening and tackling hateful conduct. Over the past six months, we’ve introduced a host of new tools and features to improve Twitter for everyone. We’ve also improved the in-app reporting process for our users and we continue to review and iterate on our policies and their enforcement. Our work will never be ‘done’.

“As the world’s conversation evolves, so too does the challenge we face. We will continue to operate at pace, while meeting our core principles around freedom of expression, and defending and respecting the voices of those who use our service worldwide.”

Twitter is also stepping up its efforts to inform users of existing tools they can use to manage which content they do and don’t see on its platform (or “manage your experience”, as it puts it), and is currently sending an email notification to users in Europe flagging what it describes as “three key tools for staying safe”, namely:

Mute

Rather than seeing Tweets containing content you’d like to avoid, you can manage what appears in your timeline and notifications by muting accounts, words, and conversations.


Notification Filters

Get an extra level of control by filtering the types of accounts you see in your notifications. You can choose to stop seeing notifications from certain kinds of accounts.


Block

You can instantly block any account. When you do, that account holder can’t see your Tweets or send you a message while they’re logged in.