Who is to blame for algorithmic outrage?

This week we saw a few high-profile (but, if we’re honest, low-impact) demonstrations of the ability to target advertising at unsavory groups, using categories generated or suggested by the ad systems of major internet companies.

“Jew haters”? There aren’t many, but go for it, Facebook’s ad backend said. Try adding “jews in an oven” to broaden your reach, suggested Google. “Nazi” could engage 18.6 million users, said Twitter.

Upon being alerted to these bafflingly obvious abuses of their systems, the companies all struck the same notes in response: “this is against our rules, we have no idea how it happened, and it’s fixed now.”

I can’t be the only one who found this affected concern, monocle-popping shock, and confident deflection unconvincing.

There’s been a lot of talk about combating hate speech on various platforms and countering the very real possibility of algorithmic bias. These things are strongly condemned at every possible opportunity, and sometimes, as with the hot-potato hosting of Stormfront, there is even a chance to show off a company’s dedication to the principle at little cost.

But then the same companies seem to have been perfectly happy to make money from advertising targeted at groups like “Hitler did nothing wrong.”

Oh sure, they jumped on it when someone publicly showed how easy it was, even with extreme cases. The ads were shut down with a quickness, they point out — only a few people ever saw them. And it was always against the rules. And it probably wouldn’t have passed human review. And anyway, we fixed it.

Why should we trust them, now or going forward?

This wasn’t some elaborate hack. Someone literally just put words like “nazi” into the ordinary advertising systems of some of the largest digital platforms on Earth — platforms that have repeatedly and specifically stated their dedication to not allowing this exact thing. How was it none of them, with their thousands of employees and dedicated task forces and diversity officers, saw this coming?

A considerable amount of doublethink is necessary to reconcile these companies’ ostensibly grand efforts to combat hate speech with their apparent inability to prevent it within their own tightly controlled monetization systems. Is it cynical to think that perhaps these companies were unwilling to institute restrictions on the parts of the business that make them money? I would ask why we should give them the benefit of the doubt to begin with.

The reflexive spin is simple enough: But it was the users! How could we have predicted something like this?

Well, if they can’t predict it, maybe they shouldn’t make such conspicuous promises about preventing it. The blame for these incidents lies squarely with the companies themselves. They created the opening by building systems that blindly pull and suggest information from users, and they failed to provide protections against even elementary abuses of those systems. And let’s not pretend this is the only such abuse of these systems from which they stand to gain — Facebook sold $100K (or 5 million rubles) worth of political ads to a Russian botnet.
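
To put “elementary” in perspective: even a crude keyword screen over the categories an ad backend surfaces would have caught the terms used in this week’s tests. The sketch below is a hypothetical illustration, not any platform’s actual pipeline; the denylist entries, function names, and the choice to filter at suggestion time are all assumptions made for the example.

```python
# Hypothetical sketch: screening user-derived ad-targeting categories
# before they are ever suggested to an advertiser. Not any platform's
# real system; the denylist and category names are made up.

import re

# A real list would be maintained by policy teams and be far larger;
# these few entries exist only to show the mechanics.
DENYLIST_PATTERNS = [
    r"\bnazi\b",
    r"\bjew\s*hater",
    r"hitler did nothing wrong",
]

def is_allowed_category(category: str) -> bool:
    """Return False if a user-generated category matches an obviously
    hateful pattern and should never reach the ad-buying interface."""
    text = category.lower()
    return not any(re.search(pattern, text) for pattern in DENYLIST_PATTERNS)

def filter_suggestions(categories: list[str]) -> list[str]:
    """Drop disallowed categories instead of suggesting them to advertisers."""
    return [c for c in categories if is_allowed_category(c)]

if __name__ == "__main__":
    raw = ["Jew haters", "jewelry lovers", "Nazi", "knitting"]
    print(filter_suggestions(raw))  # -> ['jewelry lovers', 'knitting']
```

A pattern list this crude would obviously miss plenty, which is exactly why the deeper questions further down matter; but it is the kind of minimum bar these monetization systems apparently lacked.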

Don’t worry, though. That’s not allowed! They have protections against this kind of thing!

Don’t buy these feeble excuses. Talk is cheap, especially in tech.

If these platforms want us to believe they are taking this seriously and dedicating real resources to it, not just to preventing trivial abuses like the proof-of-concept tests done this week but to countering deeper, subtler tactics, they need to show their work.

Google, Facebook, Twitter, and other companies should conduct their hate speech and free speech campaigns openly and provide every salient detail.

For a start: What systems are in place to prevent abuses like these? How are sets of offensive terms created and maintained? On what data are moderation algorithms trained? How is feedback incorporated, and how can a decision be appealed? Where is human intervention still required? And is all of this compatible with the platforms’ stated goals around free speech?
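
Transparency on those questions could take a concrete form. Purely as an illustration, and not a description of any existing reporting format, each moderation call on a targeting category could be published as a structured record like the hypothetical one sketched here, showing what was blocked, under which rule, and whether a human ever looked at it.

```python
# Hypothetical sketch of a publishable moderation-decision record.
# The schema, field names, and values are assumptions for illustration,
# not any platform's actual reporting format.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModerationDecision:
    category: str     # the user-derived targeting term at issue
    action: str       # e.g. "blocked", "allowed", "escalated"
    rule_id: str      # which policy or term-set entry triggered the call
    decided_by: str   # "automated" or the name of a human-review queue
    appealable: bool  # whether the decision can be contested
    decided_at: str   # ISO timestamp, for auditability

def record_decision(category: str, action: str, rule_id: str,
                    decided_by: str, appealable: bool) -> str:
    """Serialize one decision so it could appear in a public transparency report."""
    decision = ModerationDecision(
        category=category,
        action=action,
        rule_id=rule_id,
        decided_by=decided_by,
        appealable=appealable,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(decision), indent=2)

if __name__ == "__main__":
    print(record_decision("Jew haters", "blocked",
                          "hate-speech/targeting-category", "automated", True))
```

Even a minimal record like this would let outside researchers check how often decisions are automated, how often they are appealed, and how often they are reversed.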

Answers to these and other questions are necessary if we are to understand how these systems work, whether they’re effective, and where they need improvement. After all, they’re for our benefit — right?

It’s not good enough for these companies to say they’re working on it. If they’re going to trumpet their leadership in and dedication to principles of openness and inclusivity, it is incumbent on them to back that up with maximum transparency. Show us the data. Then we’ll believe they care.