U.K. ISPs Agree To Do More To Filter Extremist Content Online

The U.K. government’s latest crackdown on terrorism is once again focusing on the digital sphere, with Prime Minister David Cameron announcing that major Internet companies have agreed to do more to tackle terrorist and extremist material online — by “introducing stricter filters, increased industry standards and better reporting mechanisms”.

Giving a speech in Australia today, Cameron said he was intent on getting Internet companies to be “more pro-active” in filtering the content on their platforms.

One incoming measure detailed by the government is a “public reporting button for extremist and terrorist material online”, which it said four major U.K. ISPs (BT, Virgin, Sky and TalkTalk) have committed to host. This will apparently be similar to the reporting button which allows the public to report child sexual exploitation on the Internet.

The government also claimed ISPs have agreed to tighten their filters to “ensure that terrorist and extremist material is captured” — in a bid to prevent children and young people coming across radicalizing material online.

And apparently it’s not just ISPs involved in these government-led negotiations. Downing Street said Facebook, Google, Yahoo and Twitter have also agreed to “support smaller industry players to raise their standards and improve their capacity to deal with this material”. Whatever that means.

TechCrunch contacted all four Internet companies but at the time of writing three had failed to respond, while a Twitter spokesperson declined to comment — after claiming to know nothing about “any such arrangement beyond our existing guidelines for law enforcement”.

U.K. ISPs such as BT, meanwhile, already offer parental controls — allowing parents to apply light, moderate or strict filters to inbound Internet traffic to block specific types of content such as pornography, drugs, tobacco and alcohol.

But the government’s announcement suggests an expansion of their filtering efforts is on the cards — although it is not clear exactly how ‘extremist’ content will be defined, or whether filtering of such content will be opt-in or automatically applied by ISPs. All of those details are apparently TBC. So the core of today’s news is that ISPs have agreed in principle to do more — whatever more ends up meaning. And the U.K. PM has banged a public drum about cracking down on terrorist content online. This, folks, is politics.

None of the ISPs or Internet companies TechCrunch contacted for more details were exactly keen to talk. Most declined to comment in detail, saying they were waiting for statements to be signed off, or that it was too soon to talk about specifics. Some seemed genuinely surprised the government had made the announcement at this point, as if it had pulled the trigger early.

A spokesman for the Prime Minister’s office also couldn’t provide specific details, such as a timeframe for implementing the public reporting button for extremist material, characterizing the deal with ISPs as a “high level agreement” and adding that the government is delighted ISPs are taking a role. “We now need to sit down and work out how it will work in practice,” he said.

He added that government-led negotiations with other Internet companies such as social media outlets were aimed at encouraging “some of the larger firms to take a leading role” — whether by sharing best practice or resources with smaller web entities to help them identify and remove extremist content.

The spokesman pointed to the sophistication of terrorist group ISIS’ use of social media as one impetus to get social platforms to do more to combat extremists online. “There’s a hugely important role for the likes of Facebook,” he told TechCrunch, adding: “This isn’t about censorship of freedom of expression, it’s about tackling extremist, terrorist media.”

How to define extremist content has clearly yet to be determined, and will be a key requirement for moving these in-principle agreements forward. The spokesman suggested there may be a need for independent oversight or pre-agreed definitions. “It hasn’t really been formalized or discussed. It will certainly have to be,” he added.

The U.K. already removes thousands of pieces of online content: a dedicated law enforcement unit, the Counter Terrorism Internet Referral Unit (CTIRU), has instigated the removal of more than 55,000 pieces in the four years since it was set up — some 34,000 of them since December 2013.

But evidently the government wants a more pro-active response to combat viral online propaganda tactics being used by groups like ISIS, and is applying pressure to major Internet companies to help. This is clearly going to be controversial, given the difficulties of defining extremist content and the resulting risks to freedom of speech and expression online. But that’s not going to stop the government trying.

In a statement provided to TechCrunch, Sky said: “We’re exploring ways in which we can help our customers report extremist content online, including hosting links on our website.”

BT also provided a statement, saying: “We have had productive dialogue with Government about addressing the issue of extremist content online and we are working through the technical details.”

A Virgin Media spokesperson added: “We’re exploring options that will enable more extremist content to be filtered and reported online. We’ll continue talks with Government as we work through the technical details.”