Twitter Makes Tweaks To Limit Abuse And Abusers

Twitter has today announced some product and policy changes aimed at better tackling violent threats and abuse on its platform.

The problem of the mainstream social network being misappropriated as an amplification medium for fringe hate groups, whether to spread jihadist propaganda or to harass and threaten women with violent misogyny, has risen up the political agenda in recent times, especially given the spread of ISIS in the Middle East.

Away from terrorism, the GamerGate saga also rebounded on Twitter as women involved in the games industry found its platform turned into a conduit for violently misogynistic and sustained online harassment that has included death and rape threats, graphic material and doxxing.

And that’s by no means an exception. Various public figures (female and male) have suffered episodes of abuse via the platform in recent years, such as British journalist and feminist activist Caroline Criado-Perez (following her 2013 campaign to have a woman depicted on British bank notes). Or the daughter of Robin Williams, following her father’s suicide. The sight of a public figure taking a Twitter hiatus after an abusive episode is a regular occurrence.

Just this month comedian Sue Perkins tweeted she was off Twitter for “a bit” after being targeted with violent death threats following rumors she might take over as presenter of a BBC TV show about cars.

Back in February a leaked memo from Twitter CEO Dick Costolo to staff indicated he was both aware of Twitter’s abuse problem and intending (finally) to prioritize tackling it. “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day,” he wrote.

To use the vernacular: no shit.

Today Twitter has laid out its latest strategies for dealing with abusive trolling, some of which are still evidently a work in progress. Indeed, Twitter suggests the precarious business of balancing its long-stated desire to champion free speech with the need to avoid becoming a dumb conduit for online bullying (and worse) is likely to result in continued tweaks in this area.

“As the ultimate goal is to ensure that Twitter is a safe place for the widest possible range of perspectives, we will continue to evaluate and update our approach in this critical arena,” notes Shreyas Doshi, Twitter’s director of product management.

One key change Twitter has announced today is also being described as a test, so it is clearly still being evaluated for effectiveness. This is an actual product change, not just a policy tweak (though it’s doing some tweaking there too).

“We have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach,” writes Doshi, describing what amounts to a pre-emptive filtering of tweets — in a bid to identify and limit the spread of abusive content on the platform.

So, in other words, to cut abusive trolls off at the moment of tweeting, rather than after targeted abuse has hit home and caused the intended distress. This is a pretty big step for a company that has been so bullish in proclaiming that ‘the tweets must flow’ in the past.

“This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive,” adds Doshi.

“It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular.”
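Twitter hasn’t detailed the mechanics of the test, but the two signals Doshi names (the age of the tweeting account and similarity to previously flagged content) lend themselves to a simple scoring approach. Here’s a minimal, purely illustrative sketch in Python; the thresholds, weights and helper names are my own assumptions, not Twitter’s implementation:

```python
from datetime import datetime, timezone
from difflib import SequenceMatcher

# Hypothetical examples standing in for tweets Twitter's safety team has
# previously and independently determined to be abusive.
KNOWN_ABUSIVE = [
    "example of a previously flagged abusive tweet",
    "another tweet judged abusive by the safety team",
]

def account_age_days(created_at: datetime) -> float:
    """Age of the tweeting account in days (expects a timezone-aware datetime)."""
    return (datetime.now(timezone.utc) - created_at).total_seconds() / 86400

def similarity_to_known_abuse(text: str) -> float:
    """Crude lexical similarity to previously flagged content (0.0 to 1.0).
    A real system would presumably use learned representations instead."""
    return max(SequenceMatcher(None, text.lower(), known.lower()).ratio()
               for known in KNOWN_ABUSIVE)

def abuse_score(text: str, account_created_at: datetime) -> float:
    """Combine the two signals Doshi mentions into a single score."""
    score = similarity_to_known_abuse(text)
    if account_age_days(account_created_at) < 7:   # brand-new accounts count as a risk signal
        score += 0.2
    return min(score, 1.0)

def should_limit_reach(text: str, account_created_at: datetime,
                       threshold: float = 0.8) -> bool:
    """True if the tweet's distribution should be limited, e.g. kept out of
    notifications, without hiding it from followers who sought it out."""
    return abuse_score(text, account_created_at) >= threshold
```

A production system would plainly use learned models and many more signals, but the point stands: the filter acts on context (who is tweeting, and how familiar the content looks to the safety team) rather than on whether the opinion expressed is controversial.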

The Guardian reports that the new (let’s say beta) abuse filter is based on an optional quality filter already available to verified Twitter users, i.e. the ‘tailored’ notifications option highlighted below, albeit not as strict as the verified filter. That makes sense, given this anti-abuse filter is switched on automatically for all users rather than offered as an optional toggle.

[Screenshot: the ‘tailored’ notifications quality filter option in Twitter’s settings]

Twitter is also making two policy changes aimed at tightening the screw on violent threats, widening what it concedes was a previously “unduly narrow” definition of threats. That sounds very much like it’s aimed at tackling terrorist propaganda spread via Twitter.

“We are updating our violent threats policy so that the prohibition is not limited to “direct, specific threats of violence against others” but now extends to “threats of violence against others or promot[ing] violence against others.” Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior. The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse,” writes Doshi.

Twitter is also tweaking its enforcement actions for dealing with abuse violations, adding a new option (in addition to existing ones where it can require users to delete content or verify their phone number) that gives its support team the ability to lock abusive accounts for specific periods of time. The aim is to throttle the velocity of trolling episodes by locking abusers out of their accounts.

“This option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people,” adds Doshi.

Many people who have received abuse via Twitter, myself included, have observed that trolling often follows a pattern: a sustained wave of harassment directed at the target over a relatively compact time period, apparently redirected to Twitter from another online platform where (presumably) the original call to troll was posted. If Twitter can temporarily lock out accounts while one of these mass trolling events is taking place, that’s a potential way to defang and defuse a co-ordinated bullying campaign. (Although abusers could still create new accounts to restart the abuse.)
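To make that idea concrete, here’s a rough sketch of how a time-limited lock might be triggered when an unusually large number of distinct accounts pile on to a single target within a short window. Everything in it (the thresholds, the in-memory structures, the function names) is an assumption for illustration, not a description of Twitter’s actual tooling:

```python
import time
from collections import defaultdict

# Hypothetical in-memory state; a real system would use persistent, sharded storage.
mention_log: dict[str, list[tuple[float, str]]] = defaultdict(list)  # target -> [(timestamp, sender)]
locked_until: dict[str, float] = {}                                   # sender -> lock expiry (unix time)

def record_mention(target: str, sender: str) -> None:
    """Log that `sender` has just tweeted at `target`."""
    mention_log[target].append((time.time(), sender))

def senders_in_window(target: str, window_secs: float = 3600) -> set[str]:
    """Distinct accounts that mentioned `target` within the last hour (by default)."""
    cutoff = time.time() - window_secs
    return {sender for ts, sender in mention_log[target] if ts >= cutoff}

def maybe_lock_wave(target: str, threshold: int = 50, lock_hours: float = 12) -> None:
    """If an unusually large number of distinct accounts mention one target in a
    short window (the pattern typical of a co-ordinated pile-on), lock them all
    out of their accounts for a fixed period."""
    senders = senders_in_window(target)
    if len(senders) >= threshold:
        expiry = time.time() + lock_hours * 3600
        for sender in senders:
            locked_until[sender] = expiry

def is_locked(sender: str) -> bool:
    """Whether an account is currently serving a temporary lock."""
    return time.time() < locked_until.get(sender, 0.0)
```

Such a lock expires on its own, which broadly matches the compressed timeline of the pile-ons described above: by the time it lifts, the co-ordinated wave has usually lost momentum.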

“While dedicating more resources toward better responding to abuse reports is necessary and even critical, an equally important priority for us is identifying and limiting the incentives that enable and even encourage some users to engage in abuse,” adds Doshi.

Twitter has been doing outreach on online misogyny. Last November it heard evidence from a team of academics affiliated with Lancaster University who, as part of a research project begun in November 2013, had been studying online misogyny and rape threats made via Twitter. The Discourses of Online Misogyny (DOOM) team was also aiming to develop methods and tools for analyzing online hate speech, such as building up linguistic profiles of abusers and identifying community-specific lexis, in order to help automate the detection of online abuse and abusers. It looks likely that Twitter is drawing on some of that research here.

Discussing the DOOM project’s work last November, which focused specifically on Twitter data relating to the Criado-Perez abuse case, researcher Mark McGlashan said the team had combined linguistic analysis with social network analysis to look at how people affiliate via language, create communities and use discourse in a way that forms groups of trolls who deliberately join together to abuse people online.

“What we need to do now is build up profiles for [whether] the kind of language they use is community-specific language,” he told TechCrunch at the time. “It does look like there’s community-specific language — like misspellings of rape… [such as] ‘raep’. And they call themselves ‘raep crew’… so there’s definitely in-group markers and community-specific lexis that they use.

“If there are further incidents, or if it occurs again and it occurs quite frequently, can we build up a dataset and a profile so we can automatically detect people who are being abusive in a similar way… The methodology I’ve come up with [is] now pretty much automated, so [coming] up with a profile to detect them automatically isn’t too far off.”

“You also see a crossover between misogyny and homophobia. And misogyny and racism and antisemitism… so within this group the kind of language they’re using is homophobia, racism, sexism, misogyny, that’s who they are,” McGlashan added.
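The DOOM team hasn’t published its tooling, but the core idea of spotting community-specific lexis can be illustrated with a simple lexicon match. Below is a sketch under my own assumptions; the word list and matching logic are illustrative stand-ins rather than the researchers’ actual method:

```python
import re

# Illustrative in-group markers of the kind McGlashan describes, e.g. deliberate
# misspellings used as community identifiers. This list is an assumption, not DOOM's.
IN_GROUP_LEXICON = {"raep", "raep crew"}

def lexicon_hits(text: str) -> list[str]:
    """Return the community-specific terms found in a tweet
    (case-insensitive, whole-word matches only)."""
    lowered = text.lower()
    return [term for term in IN_GROUP_LEXICON
            if re.search(r"\b" + re.escape(term) + r"\b", lowered)]

def looks_like_in_group_abuse(text: str) -> bool:
    """Flag tweets that use the community's in-group markers."""
    return bool(lexicon_hits(text))
```

A fixed word list obviously dates quickly as communities invent new markers, which is why the researchers talk about building up profiles continuously rather than relying on a static blacklist.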

Another more recent piece of research into antisocial behavior online, coming out of Cornell and Stanford Universities, also looks promising for algorithmically defeating trolls. The research suggests future trolling behavior that ends up being severe enough to get a user banned from an online community can be predicted in advance by analyzing just a handful (five to 10) of posts. The researchers say their analysis was able to predict with over 80% accuracy whether a user would be subsequently banned.
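The Cornell/Stanford work leans on richer behavioural signals than text alone, but the basic shape of the approach (learn from a user’s earliest posts whether they are likely to be banned later) can be sketched with an off-the-shelf classifier. The toy data below is my own, standing in for the far larger forum datasets used in the study:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: each sample is a user's first few posts joined into one
# document, labelled 1 if that user was later banned from the community.
first_posts = [
    "thanks for the help everyone, great community",
    "you are all idiots and this place is garbage",
]
later_banned = [0, 1]

# A bag-of-words + logistic regression baseline in the spirit of the paper;
# the study itself also used behavioural and moderation features.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(first_posts, later_banned)

new_user_posts = "why would anyone listen to you, get lost"
# Probability this user eventually gets banned, judged from their earliest posts.
print(model.predict_proba([new_user_posts])[0][1])
```

In the research itself, the reported accuracy came from combining the content of those early posts with signals such as how other users and moderators responded to them, not from the text alone.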