TikTok to diversify its ‘For You’ feed, let users pick the topics they want to avoid

TikTok announced this morning it’s taking a new approach to its “For You” feed — the short-form video app’s main feed, powered by its algorithmic recommendations. The company already detailed how its algorithm works to suggest videos based on users’ engagement patterns within its app, but admits that too much of a particular content category can be “problematic.” The company now says it’s working to implement new technology to interrupt “repetitive patterns” on its app and is also developing a tool that would allow users to have a say in the matter by letting them pick which topics they want to avoid.

The company explains in its announcement that “too much of anything, whether it’s animals, fitness tips, or personal well-being journeys, doesn’t fit with the diverse discovery experience we aim to create.” However, TikTok isn’t diversifying its algorithm because people are complaining of seeing one too many cute puppy videos — it’s doing so because regulators are cracking down on tech and questioning the harmful impacts of unchecked recommendation algorithms, particularly when it comes to teen mental health.

Facebook and Instagram execs, along with those from other social platforms, have been hauled before Congress and questioned about how their apps direct users to dangerous content, such as pro-anorexia and other eating disorder material.

TikTok, in its announcement, mentions the types of videos that could be harmful if viewed in excess, such as videos themed around "extreme dieting or fitness," "sadness" and "breakups." A user who engages with videos of this nature may well find them interesting, but the algorithm isn't yet smart enough to know that repeatedly directing the user to more of the same could actually do them harm. This problem is not limited to TikTok, of course. Across the board, it's becoming clear that systems designed only to increase user engagement through automated means will do so at the expense of users' mental health. While Congress is currently most interested in how these systems impact young people, some studies, though debated, have indicated that unchecked recommendation algorithms may also play a role in radicalizing users who could be drawn to extreme views.

TikTok says it will also test new ways to avoid recommending a string of similar videos when users watch and engage with these potentially harmful kinds of content. But it has only offered examples of the types of videos it would limit, not a full list.

In addition, the company said it’s developing technology that will help it to recognize when a user’s “For You” page isn’t very diverse. While the user may not be watching videos that actually violate TikTok’s policies, the company said that viewing “very limited types of content…could have a negative effect if that’s the majority of what someone watches, such as content about loneliness or weight loss.”
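TikTok hasn't explained how such detection would work, but conceptually it amounts to measuring how concentrated a user's recent viewing is across topics. The sketch below is purely illustrative: the topic labels, the entropy measure and the threshold are assumptions for the sake of the example, not anything TikTok has described.

```python
import math
from collections import Counter

def feed_diversity(recent_topics: list[str]) -> float:
    """Shannon entropy (in bits) of the topic mix in a user's recent views.

    0.0 means every recent video shared one topic; higher values mean a
    more varied feed. The topic labels are hypothetical -- TikTok hasn't
    published its actual taxonomy or scoring.
    """
    counts = Counter(recent_topics)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Illustrative only: flag a feed dominated by a single sensitive topic.
recent = ["weight_loss"] * 18 + ["cooking", "travel"]
if feed_diversity(recent) < 1.0:  # threshold chosen arbitrarily, not TikTok's
    print("Low diversity detected: mix in videos from other topics")
```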

Another strategy TikTok plans to roll out is a new feature that would let people direct the algorithm themselves. They would be able to use it to choose words or hashtags associated with content they don't want to see in their "For You" feed. This would come in addition to TikTok's existing tools for flagging videos users don't like, such as tapping "Not Interested."
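Again, TikTok hasn't detailed how the filter would be applied, but the basic mechanics are straightforward: before a candidate video is shown, check its caption and hashtags against the user's blocklist. The following minimal sketch uses invented field names and data, since TikTok hasn't described its actual data model.

```python
def passes_filter(video: dict, blocked_terms: set[str]) -> bool:
    """Return True if none of the user's blocked words or hashtags
    appear in the video's caption or hashtag list.

    The `caption`/`hashtags` fields and this whole flow are assumptions
    made for illustration only.
    """
    text = video.get("caption", "").lower()
    tags = {t.lower().lstrip("#") for t in video.get("hashtags", [])}
    for term in blocked_terms:
        term = term.lower().lstrip("#")
        if term in text or term in tags:
            return False
    return True

blocked = {"#dieting", "breakup"}
candidates = [
    {"caption": "My breakup playlist", "hashtags": ["#sad"]},
    {"caption": "Weekend hike", "hashtags": ["#outdoors"]},
]
for_you = [v for v in candidates if passes_filter(v, blocked)]
print(for_you)  # only the hiking video survives the filter
```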

To be clear, TikTok's announcement today only lays out a roadmap of its plans; it isn't the actual launch of these changes and features. Rather, it's an attempt to head off further regulatory scrutiny of its app and its potentially harmful effects. Its strategy was likely informed by the kinds of questions asked during its own congressional hearing and at those of its rivals.

TikTok notes that the actual implementation could take time and iteration before it gets things right.

“We’ll continue to look at how we can ensure our system is making a diversity of recommendations,” the company noted.