X says 325K posts and 375K accounts ‘actioned’ over Israel-Hamas war violations

X, the social platform formerly known as Twitter, has faced waves of criticism over how, under owner Elon Musk, it has grappled with issues of trust and safety, and specifically how well it handles content moderation and the removal of malicious or harmful posts and accounts.

Today, the company, which said it now has over 500 million monthly visitors to its platform, published some figures and updates on how it has been coping with one major test case of all that: the Israel-Hamas war and content related to fake news, abusive behavior and violence.

Topline figures include 325,000 pieces of content “actioned” over violent and hateful conduct violations and 375,000 accounts suspended or limited. More details on the actions taken are further down.

That X, now a privately held company, feels compelled to publish anything at all speaks to the company’s continuing efforts to play nice as it tries to court advertisers.

It also comes on the same day that the company faced its latest critique. The Center for Countering Digital Hate today published research in which it found that, of 200 posts reported multiple times for hate speech related to the conflict, 196 remained online. (Read more on its findings here.)

To be clear, the figures published today by X have no outside vetting to verify how accurate they are. And X provides no comparative internal numbers to speak to the overall size of the problem.

Trust and Safety on the platform has been an ongoing challenge for X, and many draw a direct line between that and the company’s user growth, as well as its standing with large advertisers.

Research published in October (via Reuters), covering the period just before Hamas’ first attacks in Israel, found that X’s advertising revenue in the U.S. (its biggest market) declined by 55% or more in each of the preceding 10 months.

Here are highlights from X’s update:

X Safety said it has “actioned” more than 325,000 pieces of content that violate the company’s Terms of Service, including its rules on violent speech and hateful conduct.

“Actioned” includes taking down a post, suspending the account or restricting the reach of a post. X previously also announced that it would remove monetization options for those posts (using Community Notes corrections as part of that effort).

X said that 3,000 accounts have been removed, including accounts connected to Hamas.

X added that it has been working to “automatically remediate against antisemitic content” and has “provided our agents worldwide with a refresher course on antisemitism.” It doesn’t specify who these agents are, how many there are, where they are located, who provides the refresher course, or what is in that course.

X has an “escalations team” that has actioned more than 25,000 pieces of content that fall under the company’s synthetic and manipulated media policy: that is, fake news or content created using AI and bots.

It has also targeted specific accounts related to this: more than 375,000 have been suspended or otherwise restrained, it said, as a result of investigations into efforts to undermine “authentic conversation” around the conflict.

This has included coordinated/inauthentic engagement, inauthentic accounts, duplicate content and trending topic/hashtag spam, it added. This is ongoing, although again there is no clarity on methodology. In the meantime, X said it’s also looking at disrupting “coordinated campaigns to manipulate conversations related to the conflict.”

Graphic content, X said, continues to be allowed if it’s behind a sensitive media warning interstitial and is newsworthy, but it will remove those images if they meet the company’s “Gratuitous Gore” definition. (You can see more on this and other sensitive content definitions here.) The company did not disclose how many images or videos have been flagged under these two categories.

Community Notes, X’s Wikipedia-style crowdsourced moderation program, has come under scrutiny from critics of the platform in the last month. With most of the company’s in-house Trust and Safety team now gone, no outside vetting of how anything is working, and ample evidence of abuse on the platform, Community Notes has in many ways come to feel like X’s first line of defense against misleading and manipulative content.

But if that’s the case, it’s an unequal match. Relative to the immediacy of posting and sharing on the platform itself, it can take weeks to be approved as a Community Notes contributor, and the notes themselves can sometimes take hours or even days to publish.

Now, X has provided some updates on how that’s going. It said that in the first month of the conflict, notes related to posts about it were viewed more than 100 million times. The program now has more than 200,000 contributors in 44 countries, with 40,000 added since the beginning of the fighting.

It added that it is trying to speed up the process. “They are now visible 1.5 to 3.5 hours more quickly than a month ago,” it noted. It is also automatically applying notes written for, say, one video or photo to other posts containing matching media. And in an effort to repair some of the damage of letting fake and manipulative news spread on the platform, if one of those posts gets a Community Note attached to it, an alert is now sent out. X notes that up to 1,000 of these alerts have been sent per second, which really underscores the scale of the problem of how much malicious content is being spread on the platform.

If there is a motivation for why X is posting all this today, I would have guessed “money.” And indeed, the final data points it outlines here relate to “Brand Safety”: that is, how advertisers and would-be advertisers are faring in all of this, and whether their ads are running against content that violates policies.

X notes that it has proactively removed more than 100 publisher videos deemed “not suitable for monetization” and that its keyword blocklists have gained more than 1,000 new terms related to the conflict, which in turn block ad targeting and adjacency in Timeline and Search placements.

“With many conversations happening on X right now, we have also shared guidance on how to manage brand activity during this moment through our suite of brand safety and suitability protections and through tighter targeting to suitable brand content like sports, music, business and gaming,” it added.