Scammers peddling Islamophobic clickbait is business as usual at Facebook

A network of scammers used a ring of established right-wing Facebook pages to stoke Islamophobia and make a quick buck in the process, a new report from the Guardian reveals. But it’s less a vast international conspiracy and more simply that Facebook is unable to police its platform to prevent even the most elementary scams — with serious consequences.

The Guardian’s multi-part report depicts the events as a scheme of grand proportions executed for the express purpose of harassing Representatives Ilhan Omar (D-MN), Rashida Tlaib (D-MI) and other prominent Muslims. But the facts it uncovered point towards this being a run-of-the-mill money-making operation that used tawdry, hateful clickbait and evaded Facebook’s apparently negligible protections against this kind of thing.

The scam basically went like this: an administrator of a popular right-wing Facebook page would get a message from a person claiming to share their values, asking to be made an editor. Once granted access, this person would publish clickbait stories — frequently targeting Muslims, and often Rep. Omar, since they reliably led to high engagement. The stories appeared on a handful of ad-saturated websites that were presumably owned by the scammers.

That appears to be the extent of the vast conspiracy, or at least its operations — duping credulous conservatives into clicking through to an ad farm.

Its human cost, however, whether incidental or deliberate, is something else entirely. Rep. Omar is already the target of many coordinated attacks, some from self-proclaimed patriots within this country; just last month, an Islamophobic Trump supporter pleaded guilty in federal court to making death threats against her.

Social media is asymmetric warfare in that a single person can be the focal point for the firepower — figurative, but often carrying the threat of the literal — of thousands or millions. That a Member of Congress can be the target of such continuous abuse makes one question the utility of the platform on which that abuse is enabled.

In a searing statement offered to the Guardian, Rep. Omar took Facebook to task:

I’ve said it before and I’ll say it again: Facebook’s complacency is a threat to our democracy. It has become clear that they do not take seriously the degree to which they provide a platform for white nationalist hate and dangerous misinformation in this country and around the world. And there is a clear reason for this: they profit off it. I believe their inaction is a grave threat to people’s lives, to our democracy and to democracy around the world.

Despite the scale of its effect on Rep. Omar and other targets, it’s possible and even likely that this entire thing was carried out by a handful of people. The operation was based in Israel, the report repeatedly mentions, but it isn’t a room of state-sponsored hackers feverishly tapping their keyboards — the guy they tracked down is a jewelry retailer and amateur SEO hustler living in a suburb of Tel Aviv who answered the door in sweatpants and nonchalantly denied all involvement.

The funny thing is that, in a way, this does amount to a vast international conspiracy. On one hand, it’s a guy in sweatpants worming his way into some trashy Facebook pages and mass-posting links to his bunk news sites. But on the other, it’s a coordinated effort to promote Islamophobic, right-wing content that produced millions of interactions and doubtless further fanned the flames of hatred.

Why not both? After all, they represent different ways that Facebook fails as a platform to protect its users. “We don’t allow people to misrepresent themselves on Facebook,” the company wrote in a statement to the Guardian. Obviously, that isn’t true. Or rather, perhaps it’s true in the way that running at the pool isn’t allowed. People just do it anyway, because the lifeguards and Facebook don’t do their job.

In the case of the sweatpants-wearing man, there was a failure to detect what must have been a fairly obvious click farm coordinated by a handful of what are likely sockpuppet accounts. This is the kind of behavior the company has been combating for years, and although it isn’t an exact analogue of the election manipulation foreign actors are attempting, it’s a close cousin.

Though many of these hateful, scammy posts were being put up simultaneously on more than 20 politically oriented groups, and all led to the same set of websites, nothing in Facebook’s vaunted automated defense mechanisms flagged them. And remember, this wasn’t a one-time thing — it happened for months on end, involving thousands of posts.

If this isn’t “coordinated inauthentic behavior,” what is? And if Facebook has to rely on reporters — over and over, it must be said — to find scammers living and working under its own roof, why should we believe it when it says it “takes this very seriously”?

It’s disheartening, and entirely consistent with widespread discontent over Facebook’s inability to police itself, that a violation of its rules this obvious could persist for so long. For years Facebook has talked about cracking down on exactly this type of behavior: fake accounts, fake news, fake sites. To be sure, we see the occasional blog post about how a network of 200 fake users was taken down. And it may be said that the present network is only obvious in retrospect.

But consider that not only does Facebook have access to enormous amounts of non-public information that should provide the insight it needs to identify and combat this type of abuse, it has spent years — or so it claims — developing tools for exactly this purpose. It would be more disappointing had we not seen failures like this before.

The case of the coordinated hatred is the more depressing one. It seems that the Islamophobia and other vitriol was only being used because that’s what drives traffic. Was the purveyor of this content ideologically driven? It’s certainly possible, but it’s also possible that they simply knew what clicks.

Bigotry is a powerful motivator, as has been proven over and over again. And Facebook is foremost in allowing such content to flourish and reach those vulnerable to its siren call, or dog whistle, however the case may be, seemingly because it is the kind of thing that drives engagement.

Of course Facebook would deny that it is simply allowing bad actors and objectionable content to live on its platform in order to keep engagement levels high. And yet, it declined to disallow even demonstrably false political ads. It promises better moderation, and yet it won’t hire the full-time workforce necessary to do so. It wants to surface “true” news and demote “false” news, but its deals with fact-checkers fall apart left and right.

Caught in the middle of these promises and the half-hearted attempts to keep them are the users, who, the poor things, don’t always have their own best interests in mind. They create private groups to escape moderation, or tailor their posts carefully to fit neatly in the “allowed” category while remaining obviously hateful or threatening. And so a platform that was once useful in a very real yet somewhat narrow sense has become a minefield of conflict and sabotage.

That one person or a handful of people can, essentially as a byproduct of a tired scam, produce a factory for vile and divisive content, driving millions of interactions and comments that make the world a worse place, is shameful. And it seems to be an inextinguishable part of the Facebook experience — perhaps even its business model.