Twitter finally boots hate group whose videos Trump retweeted

Yesterday Twitter said it would begin enforcing new hate speech rules, shuttering accounts that promote violence against civilians to further their causes.

The same day, it suspended the accounts of the far-right British hate group Britain First, along with those of its leader, Paul Golding, and its deputy leader, Jayda Fransen.

Fransen, whose far-right anti-Muslim hate group has never had a sniff of electoral success, was nonetheless thrust into the mainstream limelight last month after President Trump retweeted three anti-Muslim videos she had shared to his 44.8M followers, earning a personal rebuke from the UK prime minister and condemnation from MPs across the domestic political spectrum for amplifying hate speech.

For a little wider context on Fransen: last year she was found guilty of religiously aggravated harassment after abusing a Muslim woman who was wearing a hijab.

Both she and Golding were also arrested in the UK earlier this week on charges relating to behavior intended to, or likely to, stir up hatred in Northern Ireland.

Curiously, when TechCrunch carried out a Twitter ‘People’ search for ‘Jayda Fransen’ today, the first suggested account was @realDonaldTrump’s.

Some minutes later his account was no longer being algorithmically linked to Fransen, suggesting human eyes at the company had spotted the recommendation AI’s misstep.

The three posts by Fransen that Trump retweeted from his personal account last month also now appear to be gone from his feed, presumably as a result of her account being shut down.

Last month Twitter faced criticism for not removing the three tweets, but defended its decision, telling CNN that there “may be the rare occasion when we allow controversial content or behavior which may otherwise violate our rules to remain on our service because we believe there is a legitimate public interest in its availability”.

“Each situation is evaluated on a case by case basis and ultimately decided upon by a cross-functional team,” a company spokesperson added at the time.

Weeks on, the Britain First “case” appears to be closed, at least as far as Twitter is concerned.

However, it’s a different story on Facebook, where the group’s page has more than 1.9M likes and includes a shop through which users can donate to the hate group or pay to become a member.

On its Facebook page, Britain First can also be found complaining that YouTube is now demonetizing its videos and gating them behind warning notices.

Earlier this year the Google-owned platform faced an advertiser backlash after ads were shown being displayed alongside hateful and offensive content, and it said it would take “a tougher stance on hateful, offensive and derogatory content.”

Last month YouTube also confirmed a major policy shift on extremist content, saying it would broaden takedowns beyond videos that directly incite violence to also remove non-violent extremist content.

In this instance, though, Alphabet/Google does not appear to be removing Britain First’s videos entirely.

Rather, it is making it harder for the group to use its mainstream platform to profit from its far-right fringe activities.

We asked Facebook why it continues to allow the hate group to maintain a presence on its platform, one demonstrably being used as a route for fundraising, but the company declined to comment.

Instead a spokeswoman pointed us to a company blog post on the topic of hate speech, from Facebook’s self-styled ‘Hard Questions’ series.

In the post, Facebook’s Richard Allan, VP of EMEA public policy, writes: “Our current definition of hate speech is anything that directly attacks people based on what are known as their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.”