Europe keeps up the pressure on social media over illegal content takedowns

The European Union’s executive body is continuing to pressure social media firms to get better at removing illegal content from their platforms before it has a chance to spread further online.

Currently there is a voluntary Code of Conduct on countering illegal online hate speech across the European Union. But the Commission has previously indicated it could seek to legislate if it feels companies aren’t doing enough.

After attending a meeting on the topic today, Andrus Ansip, the European Commissioner for Digital Single Market, tweeted to say the main areas tech firms need to be addressing are that “takedown should be fast, reliable, effective; pro-activity to detect, remove and disable content using automatic detection and filtering; adequate safeguards and counter notice”.

While the notion of tech giants effectively removing illegal content might be hard to object to in principle, such a laundry list of requirements underlines the complexities involved in pushing commercial businesses to execute context-based speech policing decisions in a hurry.

For example, a new social media hate speech law in Germany, which as of this month is being actively enforced, has already drawn criticism and calls for its repeal after Twitter blocked the account of a satirical magazine that had parodied anti-Muslim comments made by the far-right Alternative for Germany political party.

Another problematic aspect of the Commission’s push is that it appears keen to bundle a very wide spectrum of ‘illegal content’ into the same response category, apparently conflating issues as diverse as hate speech, terrorism, child exploitation and copyright infringement.

In September the EC put out a set of “guidelines and principles” which it said were aimed at pushing tech firms to be more pro-active about takedowns of illegal content, specifically urging them to build tools to automate the flagging of such content and to prevent its re-upload. But the measures were quickly criticized for being overly vague and posing a risk to freedom of expression online.
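For a sense of what such re-upload prevention typically involves, here is a minimal sketch, assuming a hash-matching approach of the kind widely used in the industry (production systems such as Microsoft’s PhotoDNA rely on perceptual hashes that survive re-encoding; this toy example uses an exact SHA-256 digest, and all function names are illustrative, not any platform’s actual API):

```python
import hashlib

# Fingerprints of content already removed by moderators.
# Real systems store these in a shared industry database.
known_removed_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint for a piece of uploaded content."""
    return hashlib.sha256(content).hexdigest()

def register_takedown(content: bytes) -> None:
    """Record the fingerprint of content a moderator has removed."""
    known_removed_hashes.add(fingerprint(content))

def should_block_upload(content: bytes) -> bool:
    """Return True if a new upload matches previously removed content."""
    return fingerprint(content) in known_removed_hashes
```

The exact-match digest shown here is the simplest possible version; it illustrates why critics worry about such filters, since the system blocks anything matching the database with no notion of context, satire or fair use.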

It’s not clear what kind of “adequate safeguards” Ansip is implying could be baked into the auto-detection and filtering systems the EC wants (we’ve asked and will update this story with any response). But there’s a clear risk that an over-emphasis on pushing tech giants to automate takedowns could result in censorship of controversial content on mainstream platforms.

There’s no public sign the Commission has picked up on these specific criticisms, with its latest missive flagging up both “violent and extremist content” and “breaches of intellectual property rights” as targets.

Last fall the Commission said it would monitor tech giants’ progress on content takedowns over the next six months to decide whether to take additional measures, such as drafting legislation. Though it has also previously lauded progress being made.

In a statement yesterday, ahead of today’s meeting, the EC kept up the pressure on tech firms — calling for “more efforts and progress”:

The Commission is counting on online platforms to step up and speed up their efforts to tackle these threats quickly and comprehensively, including closer cooperation with national and law enforcement authorities, increased sharing of know-how between online players and further action against the reappearance of illegal content.

We will continue to promote cooperation with social media companies to detect and remove terrorist and other illegal content online, and if necessary, propose legislation to complement the existing regulatory framework.

In the face of rising political pressure and a series of content-related scandals, both Google and Facebook last year announced they would be beefing up their content moderation teams by thousands of extra staff apiece.