Twitter claims tech wins in quashing terror tweets

In its latest Transparency Report, which covers requests it has received from governments pertaining to content on its platform, Twitter reports a big decline in the proportion of pro-terrorism accounts being reported over the past six months, saying such reports are down 80 per cent since its last report. It also reports a drop in the number of accounts it removed for terrorism-related content during this period.

Twitter claims reports of pro-terrorism accounts have shrunk by four-fifths in the past six months.

It also reports that the vast majority (95 per cent) of account suspensions pertaining to the promotion of terrorism resulted from use of its in-house tech tools, up from 74 per cent in the prior six-month reporting period, with government requests accounting for less than one per cent of pro-terror account suspensions.

Along with other social media platform giants, Twitter is facing increased political pressure to promptly eject terrorist content and hate speech from its platform. This pressure is especially acute in Europe, where some countries have proposed laws that would attach financial penalties to failures to take down illegal content, as a stick to encourage faster removals.

~300,000 accounts nixed for terrorism in six months

Between January and June 2017, the six-month period covered by this, Twitter’s 11th Transparency Report, the tech firm said it removed a total of 299,649 pro-terrorism accounts, surfaced both by government reports and by its own in-house tech (though the lion’s share of identifications came from its tech tools).

It says this represents a 20 per cent drop in terrorism-promoting Twitter accounts since the last reporting period, which ran from July 1, 2016 through December 31, 2016.

Which, coupled with the 80 per cent drop in government agencies reporting pro-terror Twitter accounts, suggests the company is at least managing to squeeze terrorist activity on its platform, given it seems unlikely there has been such a large reduction in globally active terrorists online over the same period. (Even as hundreds of thousands of pro-terrorism Twitter accounts are still being created every six months.)

The company further emphasizes it killed a majority of the pro-terrorism accounts set up on its platform before they could post anything: “Notably, 75% of these accounts were suspended before posting their first Tweet,” it writes.

Which seems a big win. And a figure to watch, to see whether Twitter can further increase the proportion of pro-terrorism accounts it suspends before they have tweeted in its next Transparency Report.

When we asked whether there has been a rise in Twitter's ability to cut off terrorist accounts before they've sent a single tweet, a spokeswoman for Twitter confirmed to us that this is the first time it has published data on "that particular metric".

“In the last six months we have seen our internal, spam-fighting tools play an increasingly valuable role in helping us get terrorist content off of Twitter,” she added. “Our anti-spam tools are getting faster, more efficient, and smarter in how we take down accounts that violate our TOS.”

The figure for total suspensions of pro-terrorism Twitter accounts is now approaching 1M over two years. (To be exact, the company reports 935,897 pro-terrorism account suspensions from August 1, 2015 through June 30, 2017.)

Asked for more details about the changes it’s made to its anti-terrorism tools — to apparently deliver better results — the spokeswoman told us: “We are reluctant to share details of how these tools work as we do not want to provide information that could be used to try to avoid detection.”

“We can say that these tools enable us to take signals from accounts found to be in violation of our TOS and to work to continuously strengthen and refine the combinations of signals that can accurately surface accounts that may be similar,” she added.
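For a rough sense of what combining "signals" from violating accounts to surface similar ones could look like in practice, here's a minimal, purely illustrative sketch in Python. To be clear, Twitter has not disclosed how its tools actually work; every feature name, weight, and threshold below is invented for illustration only.

```python
# Hypothetical sketch of signal-combination scoring, in the spirit of
# Twitter's description. None of these signals or weights are Twitter's.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Invented per-account features an anti-spam pipeline might extract."""
    created_minutes_ago: float      # very new accounts are a common spam signal
    shares_ip_with_suspended: bool  # infrastructure overlap with suspended accounts
    follows_suspended_ratio: float  # fraction of followed accounts already suspended
    profile_text_similarity: float  # 0..1 similarity to known-violating profiles


def violation_score(s: AccountSignals) -> float:
    """Combine signals into a single score; weights are made up for illustration."""
    score = 0.0
    if s.created_minutes_ago < 60:
        score += 0.2
    if s.shares_ip_with_suspended:
        score += 0.3
    score += 0.3 * s.follows_suspended_ratio
    score += 0.2 * s.profile_text_similarity
    return score


if __name__ == "__main__":
    candidate = AccountSignals(
        created_minutes_ago=5,
        shares_ip_with_suspended=True,
        follows_suspended_ratio=0.8,
        profile_text_similarity=0.9,
    )
    # A high enough score could flag the account for review or suspension
    # before it ever tweets, which is what 75% of these suspensions involved.
    print(f"score = {violation_score(candidate):.2f}")  # score = 0.92
```

The real systems presumably involve far richer signals and machine-learned weights, but the basic shape, scoring new accounts against patterns learned from already-suspended ones, is what the statement describes.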

Another Twitter spokesperson also pointed to a few pieces of academic research which suggest the Islamic State terror group has shifted its social media strategy from relying on Twitter’s platform to distribute violent propaganda to utilizing the messaging platform Telegram (which lets users broadcast missives to large groups).

The spokesman also made a point of flagging how the latter has been called out by security agencies for a lack of co-operation. So the company is clearly hoping to shift the big red finger of terrorism propaganda blame onto the rival Telegram messaging platform.

Abusive behavior triggered 98% of gov’t TOS reports

In this 11th edition of its Transparency Report, Twitter has also expanded the government TOS reports section (which it added in its 10th report) to show a breakdown across four categories of report, namely: Abusive Behavior, Copyright, Promotion of Terrorism, and Trademark.

This shows that the vast majority of reports Twitter is receiving from governments relate to abusive behavior on Twitter — which it says accounted for 98 per cent of global government TOS reports it received — with pro-terrorism content a very, very distant second (accounting for around 2 per cent of the reports).

This is interesting as it underlines the huge difference in how Twitter is approaching terrorism-related content vs abusive behavior: the vast majority (92 per cent) of accounts reported for terrorism went on to be removed from the platform, vs just 13 per cent (as Twitter reports it) of those reported for abusive behavior actually being suspended.


In the report Twitter says the fact that the vast majority of abuse-related reports resulted in no content being removed is down to “a variety of reasons” —

… such as the reporter failing to identify content on Twitter or our investigation finding that the reported content did not violate our Terms. As we take an objective approach to processing global Terms of Service reports, the fact that the reporters in these cases happened to be government officials had no bearing on whether any action was taken under our Rules.

You could argue that pro-terrorism content is a rather easier category to identify than ‘abusive behavior’, the latter representing a far more subjective spectrum when you’re judging a package of content delivered in tweet form (and, of course, depending on how high you dial up your ‘free speech’ setting).

Though there’s no doubt Twitter is still the target of fierce criticism, not least from many of its users, for how its platform continues to enable, for example, misogynist troll armies to pile in and harass women en masse. And such co-ordinated harassment clearly undermines the free speech rights of those being targeted. (Though Twitter has claimed to be stepping up its anti-abuse measures and tools.)

The company also continues to be criticized for racist speech on its platform. Even though its TOS expressly forbids “hateful conduct” including “on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease”.

Just this August the company was called out, in this instance by a UK parliamentary committee, for failing to act on abusive tweets, including failing to take down graphic images of suspected rape and abuse which, its critics argue, clearly violate its own community standards, which forbid inciting or engaging in “targeted abuse or harassment of others”.

In that instance the Guardian reported that the committee chair wrote to Twitter asking it to explain its methodology and timescales for removing graphic pictures and sexually explicit messages, to provide details of the average time taken to investigate reports and take down tweets, and to say what action is being taken to speed up removals.

The MP also sought information on how many staff Twitter employs to actively look for abusive content, and for more detail on its policy on the removal of tweets and the suspension of accounts.

Which are exactly the sorts of questions Twitter’s Transparency Report does not answer. Although it is at least now breaking out abusive behavior as a government TOS reports category, and revealing it to be the overwhelming number one issue being reported by government agencies.

We can’t compare this with prior Transparency Reports as Twitter was not previously breaking government reports into specific categories. But its inclusion and prominence now does suggest politicians are feeling under pressure to take action to try to curb abuse taking place on Twitter.

Of the government-reported abusive content that Twitter did remove, the company reports the largest proportion related to harassment and hateful conduct, stating: “The majority was removed for violating rules under these areas: harassment (37%), hateful conduct (35%), and impersonation (13%).”

“The remainder of the violating content fell within other areas of our prohibitions against abusive behavior as set forth in the Twitter Rules,” it adds.

Asked if it could disclose the geographical locations where it receives the most government reports relating to abusive behavior on its platform, the Twitter spokeswoman told us it cannot provide “that level of granularity this time”.

Nor, she told us, is it able to disclose the geographies where it did take action on the minority of government reports on abusive behavior and remove accounts.

The company does not reveal in this report how many reports of abusive behavior it receives in general, i.e. from all users rather than just from governments. But now that it’s breaking out government agency reports of abusive behavior, it should at least be possible to see how political pressure on Twitter over this issue rises (or falls) going forward.

Elsewhere in the Transparency Report, Twitter notes it has expanded its U.S. country report, adding a breakdown of California state information requests at the county level, and says it plans to extend this breakdown to other states in future to help users “get a better idea of how frequently their local authorities seek user account information”.

Over the report period, it also says it received 6 per cent more global government requests for account information, which affected 3 per cent fewer accounts than in the previous period. It further notes requests originated from four new countries: Nepal, Paraguay, Panama, and Uruguay.

“In addition, we received approximately 10% more global legal requests to remove content impacting roughly 12% more accounts compared to the previous reporting period. These included requests from ten new countries: Bahrain, China, Croatia, Finland, Nepal, Paraguay, Poland, Qatar, Ukraine, and Uruguay,” it adds.