Google to ramp up AI efforts to ID extremism on YouTube

Last week Facebook solicited help with what it dubbed “hard questions” — including how it should tackle the spread of terrorism propaganda on its platform.

Yesterday Google followed suit with its own public pronouncement, via an op-ed in the Financial Times, explaining how it’s ramping up measures to tackle extremist content.

Both companies have been coming under increasing political pressure, especially in Europe, to do more to quash extremist content — with politicians in the UK and Germany, among others, pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist content.

Europe has suffered a spate of terror attacks in recent years, with four in the UK alone since March. And governments in the UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content — arguing that terrorists are being radicalized with the help of such content.

Earlier this month the UK’s prime minister also called for international agreements between allied, democratic governments to “regulate cyberspace to prevent the spread of extremism and terrorist planning”.

In Germany, meanwhile, a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.

Besides the threat of fines being written into law, there’s an additional commercial incentive for Google: YouTube faced an advertiser backlash earlier this year over ads being displayed alongside extremist content, with several companies pulling their ads from the platform.

Google subsequently updated the platform’s guidelines to stop ads being served against controversial content, including videos containing “hateful content” and “incendiary and demeaning content”, so their makers could no longer monetize it via Google’s ad network. The measure only works, of course, if the company can reliably identify such content in the first place.

Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating what its plan of action is — detailing four additional steps it says it’s going to take, and conceding that more action is needed to limit the spread of violent extremism.

“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” writes Kent Walker, Google’s general counsel, in the op-ed.

The four additional steps Walker lists are:

  1. increased use of machine learning technology to try to automatically identify “extremist and terrorism-related videos” — though the company cautions this “can be challenging”, pointing out that news networks can also broadcast terror attack videos, for example (a minimal sketch of what such a classifier involves follows this list). “We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content,” writes Walker.
  2. more independent (human) experts in YouTube’s Trusted Flagger program — aka people in the YouTube community who have a high accuracy rate for flagging problem content. Google says it will add 50 “expert NGOs”, in areas such as hate speech, self-harm and terrorism, to the existing list of 63 organizations that are already involved with flagging content, and will be offering “operational grants” to support them. It is also going to work with more counter-extremist groups to try to identify content that may be used to radicalize and recruit extremists.
    “Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern,” writes Walker.
  3. a tougher stance on controversial videos that don’t clearly violate YouTube’s community guidelines — including by adding interstitial warnings to videos that contain inflammatory religious or supremacist content. Google notes these videos also “will not be monetised, recommended or eligible for comments or user endorsements” — the idea being that they will have less engagement and be harder to find. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” writes Walker.
  4. expanding counter-radicalisation efforts by working with (other Alphabet division) Jigsaw to implement the “Redirect Method” more broadly across Europe. “This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages,” says Walker.
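For readers unfamiliar with what a “content classifier” involves, the toy sketch below shows the general shape of a supervised classifier trained to prioritize uploads for human review. It is an illustration only, assuming scikit-learn and a tiny hand-labelled set of video titles; Google’s production models operate on video and audio signals at vastly larger scale, which is part of why the company says the task “can be challenging”.

```python
# Illustrative only: a toy text classifier over video titles. This is NOT
# Google's system; it just shows what "training a content classifier" means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = queue for human review, 0 = leave alone.
titles = [
    "join the fight against the infidels",          # propaganda-style phrasing
    "breaking news report on yesterday's attack",   # legitimate news coverage
    "how to bake sourdough bread at home",
    "martyrdom video urging recruits to travel",
]
labels = [1, 0, 0, 1]

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, labels)

# New uploads get a score; anything above a threshold is routed to human
# (e.g. Trusted Flagger) review rather than being removed automatically.
for title in ["news network broadcasts attack footage",
              "recruits urged to join the fight"]:
    score = model.predict_proba([title])[0][1]
    print(f"{title!r}: review priority {score:.2f}")
```

The toy example also hints at the caveat Walker raises: on shallow signals, a news item describing an attack can look much like propaganda, which is why flagged items feed human review rather than automatic removal.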

Despite increasing political pressure over extremism — and the attendant bad PR (not to mention threat of big fines) — Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can’t be directly accused of providing violent individuals with a revenue stream. (Assuming it’s able to correctly identify all the problem content, of course.)

Whether this compromise will please either side on the ‘remove hate speech’ vs ‘retain free speech’ debate remains to be seen. The risk is it will please neither demographic.

The success of the approach will also stand or fall on how quickly and accurately Google is able to identify content deemed a problem — and policing user-generated content at such scale is a very hard problem.

It’s not clear exactly how many thousands of content reviewers Google employs at this point — we’ve asked and will update this post with any response.

Facebook recently added 3,000 content reviewers to its headcount, bringing its total number of reviewers to 7,500. CEO Mark Zuckerberg also wants to apply AI to the content identification issue but has previously said it’s unlikely to be able to do this successfully for “many years”.

Touching on what Google has been doing already to tackle extremist content, i.e. prior to these additional measures, Walker writes: “We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.”
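Walker doesn’t detail how the re-upload prevention works beyond calling it “image-matching technology”, but such systems are generally built on content fingerprinting. The sketch below, assuming Pillow is installed and using hypothetical frame filenames, shows a simple difference hash of the kind used to catch exact or near-exact re-uploads of a known frame; production systems fingerprint many frames per video and use far more robust perceptual and audio matching.

```python
# Illustrative sketch of image-matching for re-upload detection.
# Assumes Pillow; the file paths below are hypothetical placeholders.
from PIL import Image

def dhash(image_path, hash_size=8):
    """Difference hash: compare adjacent pixels of a downscaled greyscale frame."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append(left > right)
    return sum(1 << i for i, bit in enumerate(bits) if bit)

def hamming_distance(a, b):
    return bin(a ^ b).count("1")

# Hypothetical usage: hash a frame from a new upload and compare it against
# fingerprints of known terrorist content that has already been removed.
known_hashes = {dhash("known_removed_frame.jpg")}
upload_hash = dhash("new_upload_frame.jpg")
if any(hamming_distance(upload_hash, h) <= 10 for h in known_hashes):
    print("Possible re-upload of known content: route to review/removal")
```

Comparing fingerprints by Hamming distance rather than exact equality is what lets this kind of check tolerate re-encoding, resizing and other small changes that a straight file hash would miss.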