Twitter today starts enforcing new rules around violence and hate

Twitter says that today it will begin enforcing new rules governing how it handles hateful conduct and abusive behavior on its platform. The changes are part of the company’s broader agenda to craft new policies focused on reducing the amount of abuse, hate speech, violence and harassment.

Specifically, Twitter explains that in addition to acting on threats of violence or physical harm, it will also look for accounts affiliated with groups that promote violence against citizens to further their causes. This includes any group that promotes violence either on or off Twitter’s platform, but it doesn’t apply to military or government entities. Twitter also says it will consider exceptions for groups currently engaged in peaceful resolution.

Meanwhile, any content that glorifies violence or the perpetrators of a violent act will also be in violation of Twitter’s new policies. That means someone like Jason Kessler, the organizer of the white supremacist rally in Charlottesville, Virginia, could be banned for tweets like the one he posted about Heather Heyer, the protester killed at the event, which essentially supported the violence that led to her murder.

Twitter further details that its policies will include celebrating “any violent act in a manner that may inspire others to replicate it or any violence where people were targeted because of their membership in a protected group.”

A single tweet won’t result in an immediate expulsion from the service, however. Instead, Twitter will initially require offending tweets to be removed. It will consider permanent suspension only for repeat violations.

In addition, Twitter says it’s broadening its hateful conduct policy and rules against abusive behavior to include those accounts that abuse or threaten others through their profile information, like their username, display name, or profile bio.

That means users can’t hide their slurs, epithets, and racist or sexist tropes in their bio without a penalty. Twitter will also look in profile information for violent threats, any statements meant to incite fear, or anything else that reduces someone to “less than human.”

These accounts will be permanently suspended, and to supplement user reports, the company plans to develop internal tools that help it identify violating accounts.

Rules around hateful imagery will also now be enforced.

Twitter had previously explained that it would consider hateful imagery and hate symbols “sensitive media,” a category that also includes adult content and graphic violence. Today, it’s defining hateful imagery as “logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, or ethnicity/national origin.”

In this case, Twitter will accept profile-level reports and require the account owners to remove the violating media.

Twitter’s lack of policy and consistent enforcement over the years has led to a surge in hate speech on its platform, particularly from neo-Nazis and other members of the alt-right. But with Twitter’s promised crackdown in the works, many have since fled to the alt-right’s version of Twitter, Gab. That network may benefit again with an influx of exiting alt-righters following today’s changes. (The alt-right, by the way, has built its own set of alternatives to mainstream social media. But many of the sites are plagued with bugs, a recent report found, and they can struggle to raise funds from traditional venture capital and angel investors.)

Though Twitter’s new policies seem like a decent starting point, many users don’t believe the company will actually enforce the rules it has laid out. Twitter’s history in this area is not great, after all. For example, despite its claims that it was taking abuse more seriously, a BuzzFeed investigation from earlier this year found a number of egregious examples of abuse and threats that slipped through the cracks.

While Twitter says its policy enforcement begins today, it noted that it may still make some mistakes. The company added that it’s also working on a “robust” appeals process for those whose accounts are flagged.