Facebook’s content moderation rules dubbed ‘alarming’ by child safety charity


The Guardian has published details of Facebook’s content moderation guidelines covering controversial issues such as violence, hate speech and self-harm culled from more than 100 internal training manuals, spreadsheets and flowcharts that the newspaper has seen.

The documents set out in black and white some of the contradictory positions Facebook has adopted for dealing with different types of disturbing content as it tries to balance taking down content with holding its preferred line on “free speech.” This goes some way toward explaining why the company continues to run into moderation problems. That and the tiny number of people it employs to review and judge flagged content.

The internal moderation guidelines show, for example, that Facebook allows the sharing of some photos of non-sexual child abuse, such as depictions of bullying, and will only remove or mark up content if there is deemed to be a sadistic or celebratory element.

Facebook is also comfortable with imagery showing animal cruelty; only content deemed “extremely upsetting” is to be marked up as disturbing.

And the platform apparently allows users to live stream attempts to self-harm — because it says it “doesn’t want to censor or punish people in distress.”

When it comes to violent content, Facebook’s guidelines allow videos of violent deaths to be shared, while marked as disturbing, as it says they can help create awareness of issues. Certain types of generally violent written statements, such as those advocating violence against women, are likewise allowed to stand, because Facebook’s guidelines require what it deems a “credible call for action” before a violent statement will be removed.

The policies also include guidelines for how to deal with revenge porn. For this type of content to be removed, Facebook requires that three conditions be fulfilled — including that the moderator can “confirm” a lack of consent via a “vengeful context” or from an independent source, such as a news report.

According to a leaked internal document seen by The Guardian, Facebook had to assess close to 54,000 potential cases of revenge porn in a single month.

Other details from the guidelines show that anyone with more than 100,000 followers is designated a public figure and so denied the protections afforded to private individuals; and that Facebook changed its policy on nudity following the outcry over its decision to remove an iconic Vietnam War photograph depicting a naked child screaming. It now allows for “newsworthy exceptions” under its “Terror of War” guidelines. (Images of child nudity in the context of the Holocaust, however, are not allowed on the site.)

The exposé of internal rules comes at a time when the social media giant is under mounting pressure for the decisions it makes on content moderation.

In April, for example, the German government backed a proposal to levy fines of up to €50 million on social media platforms for failing to remove illegal hate speech promptly. A U.K. parliamentary committee has also this month called on the government to look at imposing fines for content moderation failures. And earlier this month an Austrian court ruled that Facebook must remove posts deemed to be hate speech, and do so globally, rather than just blocking their visibility locally.

At the same time, Facebook’s live streaming feature has been used to broadcast murders and suicides, with the company apparently unable to preemptively shut off streams.

In the wake of the problems with Facebook Live, earlier this month the company said it would be hiring 3,000 extra moderators — bringing its total headcount for reviewing posts to 7,500. However, this remains a drop in the ocean for a service that has close to two billion users sharing an aggregate of billions of pieces of content daily.

Asked for a response to Facebook’s moderation guidelines, a spokesperson for the U.K.’s National Society for the Prevention of Cruelty to Children described the rules as “alarming” and called for independent regulation of the platform’s moderation policies — backed up with fines for non-compliance.

“This insight into Facebook’s rules on moderating content is alarming to say the least,” the spokesperson told us. “There is much more Facebook can do to protect children on their site. Facebook, and other social media companies, need to be independently regulated and fined when they fail to keep children safe.”

In its own statement responding to The Guardian’s story, Facebook’s Monika Bickert, head of global policy management, said: “Keeping people on Facebook safe is the most important thing we do. We work hard to make Facebook as safe as possible while enabling free speech. This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously. Mark Zuckerberg recently announced that over the next year, we’ll be adding 3,000 people to our community operations team around the world — on top of the 4,500 we have today — to review the millions of reports we get every week, and improve the process for doing it quickly.”

She also said Facebook is investing in technology to improve its content review process, including looking at how it can do more to automate content review — although it’s currently mostly using automation to assist human content reviewers.

“In addition to investing in more people, we’re also building better tools to keep our community safe,” she said. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”

CEO Mark Zuckerberg has previously talked about using AI to help parse and moderate content at scale — although he also warned such technology is likely years out.

Facebook is clearly pinning its long-term hopes for the massive content moderation problem it is saddled with on future automation. However, the notion that algorithms can intelligently judge such human complexities as when nudity may or may not be appropriate is very much an article of faith on the part of the techno-utopianists.

The harder political reality for Facebook is that pressure from the outcry over its current content moderation failures will force it to employ a lot more humans to clean up its act in the short term.

Add to that the fact that, as these internal moderation guidelines show, Facebook’s own position of wanting to balance openness and free expression with “safety” is inherently contradictory — and invites exactly the sorts of content moderation controversies it keeps running into.

It would be relatively easy for Facebook to ban all imagery showing animal cruelty, for example — but such a position is apparently “too safe” for Facebook. Or rather too limiting of its ambition to be the global platform for sharing. And every video of a kicked dog is, after all, a piece of content for Facebook to monetize. Safe to say, living with that disturbing truth is only going to get more uncomfortable for Facebook.

In its story, The Guardian quotes content moderation expert Sarah T. Roberts, who argues that Facebook’s content moderation problem is a result of the vast scale of its “community.” “It’s one thing when you’re a small online community with a group of people who share principles and values, but when you have a large percentage of the world’s population and say ‘share yourself,’ you are going to be in quite a muddle,” she said. “Then when you monetise that practice you are entering a disaster situation.”

Update: Also responding to Facebook’s guidelines, Eve Critchley, head of digital at U.K. mental health charity Mind, said the organization is concerned the platform is not doing enough. “It is important that they recognize their responsibility in responding to high risk content. While it is positive that Facebook has implemented policies for moderators to escalate situations when they are concerned about someone’s safety, we remain concerned that they are not robust enough,” she told us.

“Streaming people’s experience of self-harm or suicide is an extremely sensitive and complex issue,” she added. “We don’t yet know the long-term implications of sharing such material on social media platforms for the public and particularly for vulnerable people who may be struggling with their own mental health. What we do know is that there is lots of evidence showing that graphic depictions of such behavior in the media can be very harmful to viewers and potentially lead to imitative behavior. As such we feel that social media should not provide a platform to broadcast content of people hurting themselves.

“Social media can be used in a positive way and can play a really useful role in a person’s wider support network, but it can also pose risks. We can’t assume that an individual’s community will have the knowledge or understanding necessary, or will be sympathetic in their response. We also fear that the impact on those watching would not only be upsetting but could also be harmful to their own mental health.

“Facebook and other social media sites must urgently explore ways to make their online spaces safe and supportive. We would encourage anyone managing or moderating an online community to signpost users to sources of urgent help, such as Mind, Samaritans or 999 when appropriate.”
