In February, Twitter confirmed its plans to launch a feature that would allow users to hide replies that they felt didn’t contribute to a conversation. Today, alongside news of other changes to the reporting process and its documentation, Twitter announced the “Hide Replies” feature is set to launch in June.
Twitter says the feature will be an “experiment” — which means it could be changed or even scrapped, based on user feedback.
The feature is likely to spark some controversy, as it puts the original poster in control of which tweets appear in a conversation thread. This, potentially, could silence dissenting opinions or even fact-checked clarifications. But, on the flip side, it also means that people who enter conversations with plans to troll or make hateful remarks are more likely to see their posts tucked away out of view.
This, Twitter believes, could help encourage people to present their thoughts and opinions in a more polite and less abusive fashion, and shifts the balance of power back to the poster without an overcorrection. (For what it’s worth, Facebook and Instagram give users far more control over their posts, as you can delete trolls’ comments entirely.)
“We already see people trying to keep their conversations healthy by using block, mute, and report, but these tools don’t always address the issue. Block and mute only change the experience of the blocker, and report only works for the content that violates our policies,” explained Twitter’s PM of Health Michelle Yasmeen Haq earlier this year. “With this [‘Hide Replies’] feature, the person who started a conversation could choose to hide replies to their tweets. The hidden replies would be viewable by others through a menu option.”
In other words, hidden responses aren’t being entirely silenced — just made more difficult to view, as displaying them would require an extra click.
Twitter unveiled its plans to launch the “Hide Replies” feature alongside a host of other changes it has in store for its platform, some of which it had previously announced.
It says, for example, it will add more notices within Twitter for clarity around tweets that break its rules but are allowed to remain on the site. This is, in part, a response to some users’ complaints around President Trump’s apparently rule-breaking tweets that aren’t taken down. Twitter’s head of legal, policy and trust Vijaya Gadde recently mentioned this change was in the works, in a March interview with The Washington Post.
Twitter also says it will update its documentation around its rules to be simpler to understand. And it will make it easier for people to share specifics when reporting tweets so Twitter can act more swiftly when user safety is a concern.
This latter change follows a recent controversy over how Twitter handled death threats against Rep. Ilhan Omar. Twitter left the death threats online so law enforcement could investigate, according to a BuzzFeed News report. But the move raised questions as to how Twitter should handle threats against a user’s life in the future.
More vaguely, Twitter states it’s improving its technology to help it proactively review rule-breaking content before it’s reported — specifically doxing (tweeting someone’s private information), threats and other online abuse. The company didn’t go in-depth as to how it’s approaching these problems, but it did acquire anti-abuse technology provider Smyte last year, with the goal of better addressing the abuse on its platform.
In a company blog post, Donald Hicks, Twitter’s VP of Twitter Services, hints that the company is using its existing technology in new ways to address abuse:
The same technology we use to track spam, platform manipulation and other rule violations is helping us flag abusive Tweets to our team for review. With our focus on reviewing this type of content, we’ve also expanded our teams in key areas and geographies so we can stay ahead and work quickly to keep people safe. Reports give us valuable context and a strong signal that we should review content, but we’ve needed to do more and though still early on, this work is showing promise.
Twitter also today shared a handful of self-reported metrics that paint a picture of progress.
This includes the following:
- 38 percent of the abusive content that’s enforced is now handled proactively (note: much content still has no enforcement action taken, though);
- 16 percent fewer abuse reports after an interaction from an account the reporter doesn’t follow;
- 100,000 accounts suspended for returning to create new accounts during January–March 2019, a 45 percent increase from the same period last year;
- a 60 percent faster response rate to appeals requests through its in-app appeal process;
- 3x more abusive accounts suspended within 24 hours, compared with the same time last year;
- and 2.5x more private information removed with its new reporting process compared with the old one.
But these are largely “vanity metrics,” as they don’t offer real, hard numbers about the extent of abuse on Twitter. One hundred thousand accounts may have been caught, but how many were not? Three times more abusive accounts suspended — but out of how many total? How fast is private information actually taken down? How many people appealed their reports? How many feel the report resolved their problem? How many abuse reports are there in total? Is that number growing or declining? What percentage of the user base has used the reporting process because of harassment directed at them? And so on.
Despite Twitter’s attempts to solve issues around online abuse, it still drops the ball in handling what should be straightforward decisions. It’s not necessarily alone here, however. All of social media is at a crossroads, having built platforms that cater to engagement over health and safety; now those platforms are backpedaling furiously ahead of increased regulation.
But the problem is that human behavior is what it is. A giant public square will inevitably bring out the worst in us. Twitter is a perfect example.
The changes to Twitter were announced as Twitter CEO Jack Dorsey took the stage at TED 2019 on Tuesday, where he admitted the platform’s failings in terms of online abuse.
The company admits it still has more to do, and will continue to share its progress in the future.