# #Gamergate Shows Tech Needs Far Better Algorithms

If #Gamergate teaches us anything — beyond, of course, the vastly obvious observation about the toxicity of certain Internet demographics (which is hardly news) — it’s that algorithms and formulaic behaviour can be, and are being, gamed.

This is especially obvious in this sorry saga (for a detailed breakdown of Gamergate I recommend reading this excellent post; I won’t be rehashing the specific events here) because the players involved are exactly that: gamers. This rageful, over-entitled, Internet-connected fraternity of kids shares one core skill: playing games. Little wonder, then, that they have proved so expert at driving a toxic hellbrew of misogyny into the mainstream media — and all over social media — by gaming popular online channels using a sophisticated playbook of disruption.

Of course they have been able to do this. These individuals’ hobby is examining virtual structures for weaknesses they can exploit with digital weapons in order to progress to the next level.

Gamergate’s players have gamified online media channels and are pwning them hard: via sock puppeting to spread misinformation; by provoking and co-opting existing online subcultures to press-gang an impromptu troll army that floods mainstream social media with targeted abuse; or by crafting carefully worded email campaigns to apply collective pressure on corporates to withdraw ad support from their targets. The tactics are myriad but the end result can be summed up with one word: bullying.

What this tells us is that the technology industry absolutely needs better algorithms — to identify, badge and offer users the option to filter this stuff out of their streams — unless everyone thinks it’s okay that online discourse be hijacked by playground bullies.

Existing features of digital platforms such as ‘trending’ content or auto-suggestion prompts have always been blunt instruments: a crude measure of volume, lacking context, which can — as Gamergate amply underlines — be gamed to lift the crudest sentiments and content into mainstream view, even without considering targeted attacks. To take the most minor example, start typing the Twitter hashtag #feminism right now and you’ll find yourself prompted with auto-suggestions such as #FeminismIsAwful and #FeministsAreUgly. Ergo Twitter’s algorithms are being co-opted into an orchestrated harassment campaign.

Algorithms this crude are trivial to game. You certainly couldn’t call it hacking, because so little skill is needed to reverse engineer the formula and turn what was intended as a helpful feature into targeted and amplified abuse. Well-meaning these features may have been, but as their algorithmic rules are exposed the platforms they are attached to become vulnerable to subversion — meaning the features are no longer fit for purpose, and the rules powering them need changing to keep pace with targeted abuse.
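To see just how little skill is needed, here’s a minimal sketch (in Python) of a volume-only suggestion ranker of the sort described above. Everything in it is hypothetical, invented for illustration rather than drawn from any platform’s actual code; the point is structural. A completion wins simply by being typed most often, so a small coordinated group can outvote organic interest:

```python
from collections import Counter

# A volume-only autocomplete: completions are ranked purely by how
# often each full query was typed, with no context, account checks or
# coordination detection. All names and numbers here are invented.
query_log = Counter()

def record_query(query: str) -> None:
    query_log[query.lower()] += 1

def suggest(prefix: str, k: int = 3) -> list[str]:
    prefix = prefix.lower()
    matches = Counter({q: n for q, n in query_log.items()
                       if q.startswith(prefix)})
    return [q for q, _ in matches.most_common(k)]

# Organic traffic: many users, varied queries.
for _ in range(500):
    record_query("feminism definition")
for _ in range(300):
    record_query("feminism history")

# A small group repeating one query in a coordinated burst is enough
# to outrank organic interest, because only raw volume is measured.
for _ in range(600):
    record_query("feminism is awful")

print(suggest("feminism"))
# -> ['feminism is awful', 'feminism definition', 'feminism history']
```

Even a crude check on who is typing, or on how suddenly a query spikes, would defeat this particular attack. That missing context is exactly what a raw volume count cannot supply.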

Gamergate also underlines that our current Internet services are doing a very poor job of addressing this issue. Indeed, mainstream digital services are actively washing their hands of it.

Twitter shrugs its turquoise shoulders at the problem of online harassment, takes a moment to oil its feathers and then chirps that ‘the tweets must flow’ (unless of course it’s legally compelled to act — by, for instance, frameworks forbidding anti-Semitic comments in certain countries; or, hey, unless you’re a celebrity unhappy about the hateful treatment meted out to you by Twitter trolls in the wake of your famed father’s suicide and you threaten to leave the service entirely).

Business self-interest can clearly trump algorithmic hierarchy and lead Twitter to tweak the tweet firehose behind the scenes. Orchestrated online bullying campaigns such as Gamergate are not, apparently, worthy of Twitter’s attention. That’s a massive failure.


Google also Atlas-shrugs responsibility for the hierarchies generated by its own algorithms — again, unless pressure from the rich and powerful is brought to bear. If you own copyright on content and issue a takedown notice you will absolutely have Google snapping to attention to delist the illegally shared item. In its recently released 2014 report on its anti-piracy efforts Google notes that “millions of copyright requests per week [specifically pertaining to Google search] are processed in less than six hours”.

It also confirms that it tweaks its own auto suggest algorithms when they relate to piracy, noting: “Google has taken steps to prevent terms closely associated with piracy from appearing in Autocomplete and Related Search.”

But if you happen to be an average human dealing with some other human unpleasantness that’s attached itself to you online, whether that’s via bullying jerks or technical quirks, well, sorry, that’s just how the algorithm works. Google absolutely defends its right to define you by what others choose to click on (and/or what drives the most revenue for its advertising business).

If you are a private individual living in the U.S. and *don’t* like the Google search results associated with your name — results which inevitably work to define your identity online, since they sit there for any curious searcher to conjure with a few keystrokes — well, too bad. Mountain View’s commandment is also that the free speech must flow.

In Europe this specific situation has very recently shifted, thanks to a Spanish man’s lengthy legal battle against Google over algorithms that were continuing to foreground a news story about a 16-year-old housing-related debt he’d long since repaid. The result of that battle, this May, was a European Court of Justice ruling that identified Google and other search engines as data controllers, and thus requires them to process requests from private individuals who want something de-listed from search results associated with their name. If the information is outdated, irrelevant or erroneous in the context of that private name search, it should be de-listed per the request, says the ruling.

Now don’t get me wrong. Google has not gone quietly into this pro-privacy goodnight. It has lobbied tooth and nail against the ruling, and continues to do so. Only this week Eric Schmidt could be found speaking on the topic in public — reiterating Google’s intention to observe merely the letter of the law, while staying strangely silent about the ongoing problems created by specific Google actions that go against the spirit of the ruling by widening loopholes that achieve the opposite effect (i.e. fresh publicity for private individuals, not the sought-for obscurity).

While Schmidt was happy to say he wished Google could find a way to automate the process of reviewing the hundreds of thousands of search de-listing requests it’s so far received (“because we like to automate things — it makes everything more efficient”), he expressed no such love for fixing the philosophical and human-impacting conundrums that are evidently being generated by Google’s algorithms.

Indeed, he personally shrugged off finding a solution for the loopholes Google’s actions are helping to exacerbate — outsourcing responsibility to an outside panel of Google-appointed independent experts (which, yes, is an oxymoron). This official-sounding public advisory council model is entirely of Google’s own making. It allows the European Court of Justice’s ruling to be publicly chewed over, as if it were still up for debate, and the entire Google-paid-for roadshow to generate discussion that undermines the law by fostering the perception that it’s an unfixable can of worms. Here, says Google’s Schmidt, chairing this Google-generated roadshow, is a ruling that has even the philosophers foxed. Go figure!

Schmidt’s responses during the London advisory council meeting to audience questions curious about Google-made decisions were typically shorter and curter than his answers to questions that played to the company playbook by indulging his evident dislike of European privacy law. He ended the four-hour session by mocking the idea that implementing the ruling was even possible. (Such an attitudinal imbalance is also in evidence in Google’s written response to Europe’s data-protection-focused Article 29 Working Party, which earlier probed the company for details on its implementation of the ruling.) Google is not playing a straight bat here because, as an entity and a business, it prioritizes information over privacy. Its mission statement, after all, is to ‘organize the world’s information’. So its playbook on individual privacy — which can be, and evidently is being, compromised by its algorithms — is to make a sticky wicket even stickier.

None of this is surprising. Google is a business after all. But what is perhaps surprising is that Google is not generally perceived to be in the human misery business — yet there’s no doubt its algorithms can and do cause collateral damage to private individuals’ lives (witness the hundreds of thousands of search de-listing requests it has fielded in Europe over the past few months). Damage which Google de-emphasizes in the interests of its greater mission of organizing all of the things.

And while the notion of causing damage to individuals may instinctively sound like bad business, in fact human misery is the pull the press has used to shift papers off its stands for generations. Misery sells papers — and drives clicks. So again it’s no surprise that many media outlets have aligned with Google’s arguments against this privacy ruling, decrying it as ‘censorship’. The truth of the matter is a whole lot messier and more complex than that.

Nor is Facebook immune from criticism here. Facebook’s algorithms are in equal thrall to what drives clicks, and equally open to being gamed as a result of being driven by such a single-minded formula. If you want to game the Facebook News Feed, a basic ‘hack’ is to add the word “congratulations!” to a status update and watch it float to the top. Or enlist your friends to like your status update en masse to propel it upwards. (Ironically, Facebook has even dabbled in using its algorithms to game its users’ emotions — which demonstrates an awareness of the psychological power of whatever is allowed to be most visible to users of its platform.)
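A toy model makes the weakness plain. The sketch below is a hypothetical engagement-driven ranker; the boost keyword, weights and like counts are all invented for illustration (Facebook’s actual News Feed formula is far more complex, and not public). Because the score rewards nothing but raw likes and ‘celebratory’ keywords, both of the gaming moves just described work:

```python
from dataclasses import dataclass

# Hypothetical engagement-driven feed ranker; the boost keyword and
# weights are invented for illustration, not Facebook's real formula.
BOOST_WORDS = {"congratulations"}

@dataclass
class Post:
    author: str
    text: str
    likes: int

def score(post: Post) -> float:
    s = float(post.likes)
    # Keyword heuristic: 'celebratory' posts get a visibility bump.
    if any(word in post.text.lower() for word in BOOST_WORDS):
        s *= 2.0
    return s

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement sort: no authorship, context or coordination checks.
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("alice", "A thoughtful long-form essay", likes=40),
    Post("bob", "Congratulations to me!", likes=25),        # keyword gamed
    Post("carol", "Orchestrated campaign post", likes=60),  # liked en masse
]

for post in rank_feed(feed):
    print(post.author, score(post))
# carol and bob both float above alice: the gamed posts win
```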

Facebook has also previously been called out for being reluctant to remove misogynistic content from its platform, initially claiming this type of hate speech did not violate its terms of use. (Yet, au contraire, it has been quick to yank photos of women breastfeeding as a T&Cs breach.) Bottom line: Facebook’s platform, Facebook’s rules. And these rules skew towards content that appears popular — which means a well-orchestrated hate campaign can easily push its algorithmic buttons.

What links these mainstream online platforms is a failure to take collective responsibility for how easily their services can be misappropriated. How the potency of the algorithms that shepherd content around these massive digital landscapes can be manipulated and gamed to intentionally amplify social discord. And, most importantly, how they can be misappropriated to actively harass. Gamergate illustrates how successfully a toxic fringe can exploit mainstream digital tools to generate a disproportionate level of online disruption on the very channels that actively disown their views. Congratulations guys, you’ve been pwned!

Toxic viewpoints have no shortage of outlets online. The sprawl of the Internet offers a place for all comers. So there are dedicated channels for haters of all stripes to swap prejudices together. That may not be utopian, and certainly isn’t pleasant, but if you want to find some kind of silver lining to online cesspits you could say that at least the Internet is a level playing field. Except it’s not, if the largest playing fields can be so easily gamed.

The big social problems come when sophisticated online armies of algorithm-gamers mobilize to intentionally repurpose mainstream platforms to grab far more eyeballs than their views would otherwise get. And the big technology issue is that they are being helped to do so by the priorities of the platforms on which they are running amok. This is superpowered muck-spreading — following a viral marketing playbook — so that it spills out of all proportion, enabled by self-interested commercial services that care most about chasing clicks.

The entire Gamergate populace is by all accounts a minority movement. Even terming it a movement is to give it far more credit than it deserves. It is certainly organized. And orchestrated. Much like a group of online gamers banding together to play Battlefield or Halo. But this is a very small army whose grievances are absolutely not a mainstream concern. They have a right to shout loudly and angrily, sure, as do we all, but in normal circumstances no one but these kids’ parents would hear them. Thanks to our current crop of digital services’ formulaic mechanics, we’re all being forced to listen.

So, amid all the manufactured sound and fury of #Gamergate, a useful takeaway is that small, orchestrated online groups can magnify the impact and influence of fringe viewpoints by weaponizing mainstream digital services, repurposing these platforms as propaganda machines. This is not a new phenomenon, but the frequency with which it is happening online appears to be growing, and the toxicity being generated is becoming harder to escape as the tactics in play are honed to ever greater effect.

Gamergate activists use online channels to funnel graphic death and rape threats as a weapon to silence feminist critics. But they also repurpose more banal channels — by, for instance, carrying out orchestrated email campaigns that fire carefully worded missives at advertisers to apply commercial pressure against targets (such as hated media outlets). One campaign apparently successfully encouraged Intel to pull advertising from such a site. Again what’s interesting is that a small group of angry people are able to achieve disproportionately large results — with tangible fiscal impacts — by using digital tools as amplifiers.

What the technology industry needs is far smarter algorithms that do more than take a crude measure of volume to determine which content floats to the top. We need mainstream services that build in user support structures to protect against these types of malicious gaming tactics — by making it harder for trolls to mobilize to subvert platforms for their own fringe ends. And we the users need to apply pressure on the tech makers to examine how their tools are being weaponized and come up with fixes that can combat abuse, such as more intelligent filters/blocking options for users to arm themselves against attackers if they so choose.
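As a sketch of what such user support structures might look like in practice, here is one hypothetical shape a user-armed filter could take; the field names and thresholds are my own assumptions, not any platform’s actual API. It combines a personal blocklist with a crude coordination signal: near-identical messages arriving in bulk from young accounts get filtered as a probable pile-on.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str
    sender_account_age_days: int  # hypothetical metadata field

def coordinated_texts(messages: list[Message], min_copies: int = 5) -> set[str]:
    """Flag texts repeated near-verbatim by many young accounts,
    a crude signal of an orchestrated pile-on."""
    counts = Counter(m.text.lower().strip()
                     for m in messages
                     if m.sender_account_age_days < 30)
    return {text for text, n in counts.items() if n >= min_copies}

def filter_stream(messages: list[Message], blocklist: set[str],
                  min_copies: int = 5) -> list[Message]:
    """Opt-in filter: drop blocked senders and bulk-duplicated messages."""
    flagged = coordinated_texts(messages, min_copies)
    return [m for m in messages
            if m.sender not in blocklist
            and m.text.lower().strip() not in flagged]

# Usage: twenty fresh accounts send the same message; one friend writes in.
stream = [Message(f"troll{i}", "you should quit", 2) for i in range(20)]
stream.append(Message("friend", "great post!", 900))
print([m.sender for m in filter_stream(stream, blocklist=set())])
# -> ['friend']
```

The specific heuristic matters less than the principle (any single rule would itself be gamed in time); what it adds is context alongside volume, which is precisely what the crude formulas criticized above fail to measure.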

Trolls will always want to shout loudly, but let’s hope our algorithms aren’t always so dumb as to actively help subvert online social spaces that should be rich, varied and interesting places by transforming them into megaphones for haters.