Twitter claims its anti-abuse measures are helping, though many still disagree

Twitter today offered an update on how its improvements to user safety and its newer anti-abuse measures are having an impact. The company declined to share any hard numbers related to abuse on its network, such as how many abuse reports have been filed or how many enforcement actions it has taken, instead citing across-the-board percentage increases in things like account suspensions, limits placed on abusers’ account functionality, and the downstream results of those actions.

The report comes only days after a damning BuzzFeed article painted a picture of a social network still afflicted by a systemic abuse problem, describing Twitter’s anti-harassment controls as a “largely cosmetic solution” and its algorithmic moderation systems as not as effective “as the company would like to think.”

The truth is that BuzzFeed and Twitter are both right. Twitter has implemented a number of anti-abuse controls that didn’t exist in the past, which are having some impact on Twitter’s abuse and remediation metrics. But BuzzFeed is right in calling out the network for not having done enough to actually curb the problem.

BuzzFeed’s analysis of the situation, which also highlights several personal stories from abuse victims, speaks to a network where Twitter is still slow to respond to abuse reports and then often responds by improperly dismissing users’ harassment claims.

In short, Twitter is still a network that enables trolls to thrive and have a voice. In a simpler world, Twitter would just ban abusers once and for all, instead of toying with measures like “limited account functionality,” which is the internet equivalent of a slap on the wrist. But unfortunately for all of us, Twitter needs its user numbers to grow, not stagnate or drop thanks to widespread account bans.

In today’s announcement, the company claims to be taking action on 10 times the number of abusive accounts every day, compared with the same time last year. It also says its new systems, which identify repeat offenders who create new accounts after being suspended, have removed twice the number of such accounts over the last four months.

It also says that accounts placed into a limited functionality mode are told why, and this has resulted in 25 percent fewer abuse reports from those accounts. In addition, 65 percent of those accounts are only put into this mode one time. (That latter stat, though designed to paint a picture of a system that works, could also be flipped on its head: perhaps some of those accounts deserved a second action, but didn’t receive one?)

Twitter noted, too, that its muting tools are being adopted and that blocks following @mentions from people you don’t follow are down 40 percent.

The company also seemed to be responding directly to the characterization of its network as described by BuzzFeed when it wrote:

“We have consistent harassment definitions and policies that apply to everyone. However, people define abuse differently, so using these new tools, every person has control of what they see and experience on Twitter.”

There’s perhaps some truth to the statement that everyone defines abuse differently, and that’s even more of an issue at a time when our larger culture is struggling with where to draw the line on free speech; there are those who believe people deserve to be treated with respect, and there are those who will then demean that group as “snowflakes” who can’t handle even the slightest negativity in their lives.

There are other questions here that need to be answered. For instance, to what extent does a social network like Twitter amplify those differences, then back people into corners over their respective positions? Is it possible that Twitter’s very existence and the way it was designed encourage people to break the age-old internet rule: “don’t feed the trolls”? Could it be that Twitter’s embrace of anonymity, despite the valid reasons for it (like allowing those living under authoritarian regimes to have a voice), actually does more harm than good in the long run?

Twitter surely needs to do more; there should be no wrist-slaps for people who tweet out violent threats or who disclose private citizens’ personal information, like where they live, as a means of threatening them. But to what extent can it really create a kinder, gentler environment for online discourse, when we’ve proven, as a people, that’s something we’re not capable of?