Twitch today is announcing an update to its Community Guidelines that aims to clarify how the company will enforce its existing anti-harassment and hateful content policies, while also ramping up the attention paid to those channels that breach its sexual content guidelines.
In the case of the hateful content and anti-harassment guidelines, Twitch says that any hateful conduct will result in an immediate indefinite suspension.
“Hate simply has no place in the Twitch community,” the company said in an announcement.
It’s also expanding its enforcement of hateful conduct and harassment to include those actions that take place off-site. That is, if a Twitch user turns to other services to harass another Twitch user, Twitch will now consider their actions in breach of Twitch’s policies, too.
However, the company itself will not be scanning all of the web and social media to look for such activity – instead, victims of the harassment can send documentation to Twitch directly and ask for a review.
The company tells us these documented examples can come from “any source,” but Twitch will need to be able to personally verify them before taking action.
That seems to imply that Twitch would consider hateful or harassing tweets in violation of the policy, but may not take into account someone who harassed another via text message, as that's not independently verifiable (and because those screenshots could be faked).
In addition, Twitch is updating its moderation framework to help better enforce its policies. It will now pay close attention to the “context and intent” of the words used, not just the words themselves or the actions that took place, it says.
The company may even crack down on inappropriate jokes between friends that could be considered harassment when viewed by others; it reminds streamers that they’re still “broadcasting on a service that reaches a wide audience” and should act appropriately.
Twitch has grown to over 2 million streamers, 27,000 of whom are Partners generating revenue from their videos. Another 150,000 are mid-tier streamers called Affiliates, who can also take advantage of Twitch’s money-making tools.
With its growing base of content creators, Twitch is likely trying to keep its service from running into the same problems that have recently plagued YouTube. On Google’s video site, several top creators have found themselves in hot water over their content and comments they’ve made.
For example, YouTube had to drop top creator Logan Paul from its Google Preferred ad program, after he recorded video footage of a suicide victim. Its most popular broadcaster PewDiePie has repeatedly made racist comments. Kids’ video creators were banned for child endangerment and exploitation.
Twitch likely wants to avoid similar headlines, especially since many of its creators now do more than live stream games – they post non-game content on creative channels and record vlogs.
Above: Twitch’s IRL site today
The company is also getting tougher on policing channels for sexual content. The general guideline is that all profile and channel imagery, streams and attire should be appropriate for a public street, mall or restaurant.
“Twitch is an open global community with users of many ages and cultures. Because of this, it’s important that your content is not sexual in nature,” Twitch’s announcement states.
While the company has had a long-standing policy regarding sexual content, it will now review a streamer’s conduct in its entirety to determine if it’s intended to be sexually suggestive. That means moderators will look at things like the stream’s title, camera angles used, emotes, panels, the gamer’s attire, overlays, and chat moderation.
Twitch says the policy changes came at the request of the community itself, as people weren’t sure where Twitch was drawing the line, or, in some cases, felt the policies weren’t strong enough.
“Twitch has always had a robust set of anti-harassment policies in place,” a Twitch spokesperson told TechCrunch. “The new changes we are making to our policies provide even more clarity on what is and isn’t allowed and enable our moderators to use additional factors in determining if guidelines have been broken,” they said.
In reality, Twitch has had a number of issues with harassment on its site. For example, a Canadian man was charged with unleashing a spambot on the site that filled it with abusive messages, including racist, homophobic, and sexually harassing comments. Many Twitch streamers also re-stream others’ livestreams and comment on them in a harassing manner. And an IRL Twitch streamer was harassed and doxed for days on Facebook.
In a high-profile example last year, a woman wrote about a Twitch streamer live broadcasting their sexual harassment towards her. And let’s not forget that members of the gamer community were responsible for one of the worst examples of online harassment across social media, by way of Gamergate.
Twitch didn’t say if it has increased the size of its moderation team to coincide with these updates, but it offers community members a variety of content moderation tools, including the ability to assign moderators to police their chat, and a “Report” button for channels that alerts Twitch’s 24/7 human moderation team.
There’s a brief transition time before the changes go into effect on Monday, February 19 at 9 AM PT. This will allow streamers to come into compliance by removing clips and their on-demand videos that violate the guidelines. Twitch is also reaching out directly to those streamers whose current and past content has been an issue.
The changes are a part of what may be a larger overhaul of Twitch’s policies.
The company says it will also improve its automated chat moderation system AutoMod in the coming months, and revisit its enforcement policies for both partners and non-partners, its appeals process, IRL guidelines, and preventing user-to-user harassment.