Google’s new project aims to clean up comment sections

If you read stuff on the internet (and obviously you do because hi, you’re reading a blog), then you know the golden rule: never read the comments.

Scrolling past the end of a story is an adventure into a realm of racism, conspiracy theories and ad hominem attacks that will quickly make you lose your faith in humanity. But soon, instead of encountering Godwin’s Law in the comments, you might start encountering Google. Google’s internet-safety incubator Jigsaw today launched a new technology called Perspective, intended to clean up comment sections.

Perspective reviews a comment and assigns it a toxicity rating that reflects the likelihood that the comment is intended to be harmful. Jigsaw’s goal is to keep people engaged in the conversation, so it defines “harm” as anything that would drive other commenters away.
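Jigsaw exposes that rating through an API. As a rough illustration, here is a minimal Python sketch of what requesting a score could look like, based on the publicly documented Comment Analyzer endpoint; the placeholder key and the example comments are assumptions, and the request shape publishers in the early-access program see may differ.

```python
import requests

# Placeholder key: real keys are issued through the Google API console.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text: str) -> float:
    """Ask Perspective for a TOXICITY score between 0.0 and 1.0."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],  # the model is English-only for now
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    # summaryScore.value is the model's estimate of how toxic
    # readers would find the comment as a whole.
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("I disagree, but that's a fair point."))  # low
print(toxicity_score("Nobody cares what you think, idiot."))   # high
```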

“Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation,” Jigsaw president Jared Cohen said. “Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.”

How to interpret and react to a toxicity rating is up to publishers. Jigsaw does nothing beyond providing the score: publishers can flag high-scoring comments for human review, or hide them behind a warning that readers have to click through to see. Commenters can also be confronted with their own toxicity rating before posting, so they can decide whether that’s really what they want to say.
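Because Jigsaw supplies only the number, the moderation policy itself lives on the publisher’s side. A minimal sketch of what that decision logic might look like, with entirely hypothetical thresholds that each publisher would tune for its own community:

```python
# Entirely hypothetical cut-offs; Perspective returns only the score,
# so where to draw these lines is each publisher's editorial decision.
HIDE_THRESHOLD = 0.9    # near-certainly toxic: hide behind a warning
REVIEW_THRESHOLD = 0.7  # borderline: queue for a human moderator

def moderation_action(score: float) -> str:
    """Map a toxicity score onto a publisher-side action."""
    if score >= HIDE_THRESHOLD:
        return "hide"    # readers must click through a warning to see it
    if score >= REVIEW_THRESHOLD:
        return "review"  # flag for a human moderator
    return "show"        # publish normally

assert moderation_action(0.95) == "hide"
assert moderation_action(0.75) == "review"
assert moderation_action(0.10) == "show"
```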

Media outlets have been struggling to come up with solutions to the comment problem on their own. Outlets like Reuters have deleted their comment sections outright, while BuzzFeed is experimenting with curated comments. The New York Times partnered with Jigsaw to help develop Perspective; the paper receives 11,000 comments per day, and Jigsaw used them to train its machine learning model.

Perspective goes beyond simply flagging keywords like racial slurs: it also considers the context in which they are used, to determine whether they are part of a direct attack on another commenter or on the subject of the story. The technology will be made available to publishers through an API.
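To see why keyword matching alone falls short, consider a deliberately naive filter. The word list and examples below are placeholders, not Perspective’s actual logic; the point is that a bag-of-words check cannot tell an insult from a mention of one:

```python
import re

# Placeholder word list standing in for real slurs.
BLOCKLIST = {"idiot", "moron"}

def keyword_flag(text: str) -> bool:
    """Naive filter: flags any comment containing a listed word,
    with no sense of how the word is being used."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & BLOCKLIST)

# Both comments trip the keyword filter, but only the first is an
# attack; a context-aware model is meant to score them differently.
print(keyword_flag("You're an idiot and everyone knows it."))    # True
print(keyword_flag("Calling someone an 'idiot' derails the thread."))  # True
```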

Jigsaw also studied harassment in Wikipedia discussions to inform its work. The company scraped more than a million annotations from Wikipedia talk pages, where editors debate changes to Wikipedia articles, for its analysis. Ten judges rated each comment to determine whether it contained a personal attack and to whom the attack was directed. The judges’ opinions were then used to train Perspective. Jigsaw only used comments written in English, so Perspective can only moderate in English, for now at least.
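Ten verdicts per comment have to be collapsed into a single training label. How Jigsaw actually aggregated its judges’ ratings isn’t spelled out here; one common approach, sketched below with made-up records, is to use the fraction of judges who saw an attack as a soft label:

```python
from statistics import mean

# Illustrative records, not real data: each talk-page comment gets ten
# judgments, where 1 means the judge saw a personal attack.
annotations = {
    "comment_001": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
    "comment_002": [1, 1, 1, 1, 0, 1, 1, 1, 1, 0],
}

def soft_label(judgments: list[int]) -> float:
    """Fraction of judges who saw an attack, used as a training target."""
    return mean(judgments)

for comment_id, judgments in annotations.items():
    print(comment_id, soft_label(judgments))
# comment_001 -> 0.1 (judges mostly saw no attack)
# comment_002 -> 0.8 (most judges saw an attack)
```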

The research on Wikipedia comments found that only 18 percent of attackers received a warning or a block from moderators, meaning most harassment on the platform went unchecked. Jigsaw modeled its research on earlier work at Yahoo, which built an algorithm trained on comments flagged as abusive by its own comment moderators and was ultimately able to detect abuse with a 90 percent success rate.

Perspective is just the latest tool to come out of Jigsaw; Google’s incubation wing has also worked on mitigating distributed denial-of-service attacks and fact-checking the news.