Policing Hate Speech Is Harder Than Nipples

No automated system can identify what will offend people. What some humans find disgusting, others find controversial, and others still find funny. Computers just don’t understand.

That caused trouble for Facebook this week when activist groups pushed advertisers to boycott the site after the social network failed to suspend accounts accused of publishing hate speech and encouraging violence against women via image memes: photos of injured women overlaid with text joking about the abuse. Some of the images Women, Action & the Media denounced truly did call for violence against women and could understandably be pulled. Other, less direct references to violence could be read as obscene jokes.

Facebook had reviewed some of the accused offenders and allowed several to remain. However, the loss of ad dollars seems to have spurred it to action, as it has now vowed to do a better job of responding to complaints.

Now let’s be clear: violence against women, or anyone, is wrong. Encouraging it is, in some cases, illegal. Joking about it is insensitive. But just because this kind of content appears on Facebook doesn’t mean Facebook promotes violence. There is no misogynist conspiracy. There is a technological deficiency, a human deficiency.

Facebook’s first line of defense is a set of objectionable-content detection algorithms. They can pick out the shape of a nipple or a penis and automatically pull down the image. But they’re not perfect. They miss things and accidentally flag innocent images. Most critically in the current situation, they can’t decipher that a string of words and an image combine to mean something horrible.
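To picture that gap, here is a purely hypothetical sketch, not Facebook’s actual system: a filter that scores the image and the caption separately can clear both even when their combination is what makes the post abusive. The threshold, word list, and function names below are invented for illustration.

```python
# Hypothetical sketch: independent image and text checks miss combined meaning.
THREAT_WORDS = {"kill", "beat", "hurt"}  # toy word list, not a real filter


def violates_policy(image_nudity_score, caption):
    """Flag a post only if either signal crosses its own threshold."""
    if image_nudity_score > 0.9:          # explicit-imagery detector
        return True
    words = set(caption.lower().split())
    if words & THREAT_WORDS:              # crude keyword match on the caption
        return True
    return False


# A photo of an injured woman (low nudity score) paired with a caption that
# "jokes" about the injury using no banned words clears both checks, even
# though the combination is exactly what makes the meme hateful.
print(violates_policy(0.1, "she should have stayed in the kitchen"))  # False
```

Each signal looks harmless on its own; only a human reading the two together sees the threat.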

That’s why the second line of defense is Facebook’s users. Content flagged by too many people too quickly can be taken down automatically and reviewed later. This system has its flaws too: a group of trolls working in concert can bring down innocent content, as has happened to Pages promoting women’s rights and other just causes. When content is flagged, it’s sent to Facebook’s Site Integrity team.
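One way to see that flaw, again as an assumed sketch rather than anything Facebook has published: if takedowns trigger on the number of reports inside a time window, a coordinated troll campaign looks identical to a genuine wave of offended users. The threshold and window below are made up.

```python
from collections import deque

# Hypothetical rate-based auto-takedown; the numbers are invented for illustration.
REPORT_THRESHOLD = 50     # reports needed to auto-hide a post
WINDOW_SECONDS = 3600     # counted within the last hour


class FlagTracker:
    def __init__(self):
        self.reports = deque()  # timestamps of recent reports

    def report(self, timestamp):
        """Record one report and decide whether to hide the post pending review."""
        self.reports.append(timestamp)
        # Discard reports that have aged out of the window.
        while self.reports and timestamp - self.reports[0] > WINDOW_SECONDS:
            self.reports.popleft()
        return len(self.reports) >= REPORT_THRESHOLD


# Fifty genuinely offended users and fifty coordinated trolls produce the same
# signal, so an innocent Page can vanish until a human reviewer restores it.
```

The count alone carries no information about intent, which is why the flagged content still has to land on a human desk.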

I’ve had friends who worked on this team. It’s a tough, draining job. They are the ones who have to look at the threats of violence, the revenge porn, the racism, bigotry, and hatred. In the early days, when Facebook’s own employees reviewed this content manually, I heard the company had to rotate people through the team frequently because the nightmarish content would drive staffers to depression if they did it too long. In one case, I heard about a man who each weekend would create multiple fake accounts bearing his ex-girlfriend’s name and post naked photos he had of her. Each week the Site Integrity team would work with her to take them down. It sounded like an awful job.

Now Facebook outsources much of this work overseas, but the edge cases bubble up to its in-house staff. They have to make tough calls about where to draw the line. A blurry, jagged, subjective line that some will say is repressive and others will say promotes hatred. Humanity has had to draw this line since we learned to express ourselves. And there will never be an exactly right answer to where it belongs.

That doesn’t mean Facebook handled this situation properly. Each of us can judge whether it drew the line wrong, but it was what appeared to move the line that I found so unsettling. Though Facebook didn’t mention advertisers, and was already working on many of the reforms mentioned in its apology, it seems like the advertiser boycott made it spring into action. Facebook needs a firm stance on hate speech that is independent of its finances. That won’t be easy, and exceptions will have to be made from time to time.

This is an issue of freedom of speech. We’re right to demand sensitivity and ever-improving systems for delivering it. We must also remember, though, that the Internet’s ability to connect a diversity of opinions is one reason it is so powerful. The power to block someone else’s opinion should be used with great discretion, not just whenever we disagree.