Facebook must remove hate speech posts, Austrian court rules

More bad news for Facebook on the content front: An Austrian court has ruled the company must remove posts deemed to be hate speech, Reuters reports.

The case was brought last year by Austria’s Green party over what it argued were defamatory insults to its leader, Eva Glawischnig, posted to Facebook by a fake account.

The party said the posts were not taken down — despite repeated complaints. And the Commercial Court of Vienna granted a preliminary injunction in December, ruling that Facebook must take down posts deemed to be hateful under Austrian law.

Facebook’s appeal against that ruling now looks to have failed — with the appeals court strengthening the original ruling by saying it must remove both the original post and any verbatim repostings.

The court has also ruled that Facebook must remove content globally rather than just blocking posts locally — expanding the significance of the ruling beyond Austria’s borders.

The Austrian court did not go so far as to suggest Facebook is responsible for weeding out similar hate speech postings. But Reuters reports that the Green party intends to push for that by taking the case to the country’s highest court.

The party is also hoping to force Facebook to identify the holders of fake accounts, and to pay damages for hate speech being spread on its platform, arguing that the ability to win damages would make it easier for individuals to pursue legal action against the company, given the costs involved.

At the time of writing Facebook had not responded to a request for comment. We’ll update this post with any statement.

Facebook has been facing increasing pressure about its response to hate speech complaints in Europe. Last month politicians in Germany voted to back proposals for a law to hit social media firms with fines of up to €50 million if they fail to promptly remove illegal hate speech from their platforms — within 24 hours after a complaint has been made for “obviously criminal content”, and within seven days for other illegal content.

This month UK MPs also called for the government to consider a similar approach there, to try to enforce better standards of content moderation on social platform giants — with signs some US lawmakers might be thinking along similar lines too, vis-à-vis terrorism content.

In the UK, Facebook has recently faced accusations that it ignored complaints about extremist content and child abuse imagery being shared on its platform — and that it only removed the content when contacted by the media outlets reporting on the problem, suggesting its content moderation systems are struggling to handle the volume of complaints.

Last week Facebook itself announced it would be increasing the headcount of the team it employs to review flagged content — saying 3,000 more staff would be added to the 4,500 people it already has working on this. Although for a platform with close to two billion users, a few thousand extra moderators are a drop in the ocean of content being generated.

Differences in local laws around hate speech also obviously complicate this process for a US company operating a global platform. And it’s not clear where exactly its new moderators will be based — although Facebook has previously said it employs moderators in Germany and Dublin to review European content.

In recent weeks, some extremely disturbing incidents broadcast on Facebook Live, its live video streaming platform, have also raised awareness of the risks of moderation failures.

Meanwhile, the perception that fake news is being liberally spread via Facebook continues to pile political pressure on the company.

On this front it emerged today that Facebook has taken some pre-emptive steps to try to defang criticism of misinformation being spread via its platform ahead of the UK General Election next month — announcing it’s purged “tens of thousands” of fake accounts. It has also taken out adverts in British newspapers to warn people to be more skeptical of things they read, as it did in France ahead of presidential elections there.

But despite taking some steps to tweak its News Feed to combat deliberate attempts to game the algorithm, the company has faced criticism that its response has been inadequate — and that it continues to profit from fake news — even as concerns persist that the power of its platform has been misappropriated by malicious actors intent on subverting the democratic process.

And with lawmakers in various countries taking increasing notice of how socially divisive content can be amplified on Facebook’s platform, it looks unlikely there will be any near-term let-up in pressure on the company.

Having a CEO who pens open letters apparently setting out Facebook’s ‘humanitarian manifesto’ does not appear to have done much to help the company’s reputation as a misinformation muck spreader. tl;dr: Zuckerberg better buckle up for a bumpy ride.