Study: Russia-linked fake Twitter accounts sought to stoke social division in the UK after terrorist attacks

A study by UK academics looking at how fake social media accounts were used to spread socially divisive messages in the wake of a spate of domestic terrorist attacks this year has warned that the problem of hostile interference in public debate is greater than previously thought.

The researchers, who are from Cardiff University’s Crime and Security Research Institute, go on to assert that the weaponizing of social media to exacerbate societal division requires “a more sophisticated ‘post-event prevent’ stream to counter-terrorism policy”.

“Terrorist attacks are designed as forms of communicative violence that send a message to ‘terrorise, polarise and mobilise’ different segments of the public audience. These kinds of public impacts are increasingly shaped by social media communications, reflecting the speed and scale with which such platforms can make information ‘travel’,” they write.

“Importantly, what happens in the aftermath of such events has been relatively neglected by research and policy-development.”

The researchers say they collected a dataset of around 30 million data points from various social media platforms. But in their report they zero in on Twitter, flagging systematic use of Russia-linked sock-puppet accounts that amplified the public impacts of four terrorist attacks that took place in the UK this year, spreading ‘framing and blaming’ messaging around the attacks at Westminster Bridge, Manchester Arena, London Bridge and Finsbury Park.

They highlight eight accounts, out of at least 47 they say they identified as being used to influence and interfere with public debate following the attacks, that were “especially active”, posting at least 427 tweets across the four attacks that were retweeted in excess of 153,000 times. They only directly name three of them: @TEN_GOP (a right-wing, anti-Islam account); @Crystal1Johnson (a pro-civil rights account); and @SouthLoneStar (an anti-immigration account), all of which have previously been shuttered by Twitter. (TechCrunch understands the full list of accounts the researchers identified as Russia-linked has not yet been shared with Twitter.)

Their analysis found that the controllers of the sock puppets were successful at getting information to ‘travel’ by building false accounts around personal identities, clear ideological standpoints and highly opinionated views, and by targeting their messaging both at sympathetic ‘thought communities’ aligned with the views they were espousing and at celebrities and political figures with large follower bases, in order to “‘boost’ their ‘signal’”. As the researchers put it: “The purpose being to try and stir and amplify the emotions of these groups and those who follow them, who are already ideologically ‘primed’ for such messages to resonate.”

The researchers say they derived the identities of the 47 Russia-linked accounts from several open source datasets, including releases from the US Congress investigations into the spread of disinformation around the 2016 US presidential election, and the Russian magazine РБК, although there’s no detailed explanation of their research methodology in their four-page policy brief.

They claim to have also identified around 20 additional accounts which they say possess “similar ‘signature profiles’” to the known sock puppets, but which have not been publicly identified as linked to the Russian troll farm known as the Internet Research Agency, or to similar Russia-linked units.

While they say a number of the accounts they linked to Russia were established “relatively recently”, others had been in existence for longer, with the first appearing to have been set up in 2011 and another cluster in late 2014 and early 2015.

The “quality of mimicry” being used by those behind the false accounts makes them “sometimes very convincing and hard to differentiate from the ‘real’ thing”, they go on to assert, further noting: “This is an important aspect of the information dynamics overall, inasmuch as it is not just the spoof accounts pumping out divisive and ideologically freighted communications, they are also engaged in seeking to nudge the impacts and amplify the effects of more genuine messengers.”

‘Genuine messengers’ such as Nigel Farage, one of the UK politicians directly cited in the report as having had messages addressed to him by the fake accounts in the hope he would then use Twitter’s retweet function to amplify the divisive messaging. (Farage was leader of UKIP, one of the political parties that campaigned for Brexit and against immigration.)

Far-right groups have also used the same technique to spread their own anti-immigration messaging via President Trump’s tweets, in one recent instance earning the president a rebuke from the UK’s Prime Minister, Theresa May.

Last month May also publicly accused Russia of using social media to “weaponize information” and spread socially divisive fake news, underscoring how the issue has shot to the top of the political agenda this year.

“The involvement of overseas agents in shaping the public impacts of terrorist attacks is more complex and troubling than the journalistic coverage of this story has implied,” the researchers write in their assessment of the topic.

They go on to claim there is evidence of “interventions” involving a greater volume of fake accounts than has been documented thus far; that these spanned four of the UK terror attacks that took place earlier this year; that measures were targeted to influence opinions and actions simultaneously across multiple positions on the ideological spectrum; and that the activity was not confined to Russian units, with European and North American right-wing groups also involved.

They note, for example, having found “multiple examples” of spoof accounts trying to “propagate and project very different interpretations of the same events” which were “consistent with their particular assumed identities”, citing how a photo of a Muslim woman walking past the scene of the Westminster Bridge attack was appropriated by the fake accounts and used to push views on either side of the political spectrum:

The use of these accounts as ‘sock puppets’ was perhaps one of the most intriguing aspects of the techniques of influence on display. This involved two of the spoof accounts commenting on the same elements of the terrorist attacks, during roughly the same points in time, adopting opposing standpoints. For example, there was an infamous image of a Muslim woman on Westminster Bridge walking past a victim being treated, apparently ignoring them. This became an internet meme propagated by multiple far-right groups and individuals, with about 7,000 variations of it according to our dataset. In response to which the far right aligned @Ten_GOP tweeted: She is being judged for her own actions & lack of sympathy. Would you just walk by? Or offer help? Whereas, @Crystal1Johnson’s narrative was: so this is how a world with glasses of hate look like – poor woman, being judged only by her clothes.

The study authors do caveat that, as independent researchers, it is difficult for them to guarantee ‘beyond reasonable doubt’ that the accounts they identified were Russia-linked fakes, not least because the accounts have since been deleted (and the study is based on analysis of the digital traces left by online interactions).

But they also assert that, given the difficulties of identifying such sophisticated fakes, there are likely more of them than they were able to spot. They note, for example, that the fake accounts they did find were more likely to be concerned with American affairs than with British or European issues, suggesting more fakes could have flown under the radar because more attention has been directed at identifying fake accounts targeting US issues.

A Twitter spokesman declined to comment directly on the research. But the company has previously sought to challenge external researchers’ attempts to quantify how information is diffused and amplified on its platform, arguing that outsiders do not have the full picture of how Twitter users are exposed to tweets and thus aren’t well positioned to quantify the impact of propaganda-spreading bots.

Specifically, it says that safe search and quality filters can erode the discoverability of automated content, and claims these filters are enabled for the vast majority of its users.

Last month, for example, Twitter sought to play down another study that claimed to have found Russia-linked accounts sent 45,000 Brexit-related tweets in the 48 hours around the UK’s EU in/out referendum vote last year.

The UK’s Electoral Commission is currently looking at whether existing campaign spending rules were broken via activity on digital platforms during the Brexit vote. A UK parliamentary committee is also running a wider inquiry into the impact of fake news.

Twitter has since provided UK authorities with information on Russia-linked accounts that bought paid ads related to Brexit, though not, apparently, with a fuller analysis of all tweets sent by Russia-linked accounts. Paid ads are clearly just the tip of the iceberg when there’s no financial barrier to setting up as many fake accounts as you like to tweet out propaganda.

As regards this study, Twitter also argues that researchers with access only to public data are not well positioned to definitively identify sophisticated, state-run intelligence agency activity that is designed to blend in with everyday social networking.

The study authors’ view, though, is that given the challenge of unmasking such skillful sock puppets, they are more likely to be underestimating the presence of hostile foreign agents than overblowing it.

Twitter also provided us with some data on the total number of tweets about three of the attacks in the 24 hours afterwards: more than 600,000 for the Westminster attack, more than 3.7 million for Manchester, and more than 2.6 million for the London Bridge attack. It asserts that the intentionally divisive tweets identified in the research represent a tiny fraction (less than 0.01%) of the total tweets sent in the 24-hour period following each attack.
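As a rough sanity check on that framing, and assuming the two sets of figures can be compared at all (the researchers’ 427-tweet count covers four attacks, Twitter’s totals only three): 427 tweets set against the 6.9 million-plus tweets Twitter tallied works out to roughly 0.006%, which is consistent with its sub-0.01% claim, though that crude comparison leaves out the more than 153,000 retweets those messages attracted.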

The key issue here, though, is influence rather than the sheer quantity of propaganda, and quantifying how opinions might have been skewed by fake accounts is a lot trickier.

But with awareness growing of hostile foreign information manipulation taking place on mainstream tech platforms, this is not a topic most politicians will be prepared to ignore.

In related news, Twitter today said it will begin enforcing new rules around how it handles hateful conduct and abusive behavior on its platform — as it seeks to grapple with a growing backlash from users angry at its response to harassment and hate speech.