U.K. Home Secretary Amber Rudd is holding talks with several major internet companies today to urge them to be more proactive about tackling the spread of extremist content online. Companies in attendance include Google, Microsoft, Twitter and Facebook, along with some smaller internet companies.
We’ve contacted the four named companies for comment and will update this story with any response.
Writing in The Telegraph newspaper on Saturday, in the wake of last week’s terror attack in London, Rudd said the U.K. government will shortly be setting out an updated counterterrorism strategy that will prioritize doing more to tackle radicalization online.
“Of paramount importance in this strategy will be how we tackle radicalisation online, and provide a counter-narrative to the vile material being spewed out by the likes of Daesh, and extreme Right-wing groups such as National Action, which I made illegal last year,” she wrote. “Each attack confirms again the role that the internet is playing in serving as a conduit, inciting and inspiring violence, and spreading extremist ideology of all kinds.”
Leaning on tech firms to build tools appears to be a key plank of that forthcoming strategy.
A government source told us that Rudd will urge web companies today to use technical solutions to automatically identify terrorist content before it can be widely disseminated.
We also understand the home secretary wants the companies to form an industry-wide body to take greater responsibility for tackling extremist content online — which is a slightly odd ask, given Facebook, Microsoft, Twitter and YouTube already announced such a collaboration, in December last year (including creating a shared industry database for speeding up identification and removal of terrorist content).
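The mechanics of such a shared database are simple in principle: one platform fingerprints a piece of flagged content, and the others can then match uploads against that fingerprint. The sketch below is an illustrative simplification only; the real industry system uses proprietary perceptual hashes that survive re-encoding and cropping, not the exact SHA-256 digests shown here, and all names in the snippet are hypothetical.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying a piece of content.

    Note: a real system would use a perceptual hash robust to
    re-encoding; SHA-256 here is purely for illustration.
    """
    return hashlib.sha256(content).hexdigest()

# A shared set of fingerprints of content already flagged by one platform.
shared_database = {fingerprint(b"previously-flagged-video-bytes")}

def check_upload(content: bytes) -> bool:
    """True if an incoming upload matches previously flagged content."""
    return fingerprint(content) in shared_database
```

An exact-match scheme like this catches only byte-identical re-uploads, which is why the companies' actual collaboration centers on hashes designed to tolerate the small modifications uploaders make to evade detection.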
Perhaps Rudd wants more internet companies to join the collaboration, or more effective techniques to be developed for identifying and removing content at speed.
At today’s round-table we’re told Rudd will also raise concerns about encryption — another technology she criticized in the wake of last week’s attack, arguing that law enforcement agencies must be able to “get into situations like encrypted WhatsApp.”
Such calls are of course hugely controversial, given how encryption is used to safeguard data from exploitation by bad actors — the U.K. government itself utilizes encryption technology, as you’d expect.
So it remains to be seen whether Rudd’s public call for encrypted data to be accessible to law enforcement agencies marks the beginning of a serious clampdown on end-to-end encryption in the U.K., or merely a high-profile pressure tactic aimed at strong-arming social media companies into doing more to remove extremist content from their public networks. (NB: The government has already given itself powers to limit companies’ use of the technology, via last year’s Investigatory Powers Act.)
We understand the main thrust of today’s discussions will certainly be the latter issue, with the government seeking greater co-operation from social platforms in combating the spread of terrorist propaganda. Encryption is set to be addressed separately in follow-up discussions, we are told.
In her Telegraph article, Rudd argued that the government cannot fight terrorism without the help of internet companies, big and small.
“We need the help of social media companies, the Googles, the Twitters, the Facebooks of this world. And the smaller ones, too: platforms such as Telegram, WordPress and Justpaste.it. We need them to take a more proactive and leading role in tackling the terrorist abuse of their platforms. We need them to develop further technology solutions. We need them to set up an industry-wide forum to address the global threat,” she wrote.
One stark irony of the Brexit process — which got under way in the U.K. this Wednesday, when the government formally informed the European Union of its intention to leave the bloc — is that security cooperation between the U.K. and the EU is apparently being used as a bargaining chip, with the U.K. government warning it may no longer share data with the EU’s central law enforcement agency in the future if there is no Brexit deal.
Which does rather cast a sickly light over Rudd’s call for internet companies to be more proactive in fighting terrorism.
Not all of the companies Rudd called out in her article will be in attendance at today’s meeting. Pavel Durov, co-founder of the messaging app Telegram, confirmed to TechCrunch that it will not be there, for instance. The messaging app has frequently been criticized as a “tool of choice” for terrorists, although Durov has stood firm in his defense of encryption — arguing that users’ right to privacy is more important than “our fear of bad things happening.”
Telegram has today announced the rollout of end-to-end encrypted voice calls to its platform, doubling down on one of Rudd’s technologies of concern (albeit, Telegram’s “homebrew” encryption is not the same as the respected Signal Protocol, used by WhatsApp, and has taken heavy criticism from security researchers).
But on the public propaganda front, Telegram does already act to remove terrorist content being spread via its public channels. Earlier this week it published a blog post defending the role of end-to-end encryption in safeguarding people’s privacy and freedom of speech, and accusing the mass media of being the primary conduit through which terrorist propaganda spreads.
“Terrorist channels still pop up [on Telegram] — just as they do on other networks — but they are reported almost immediately and are shut down within hours, well before they can get any traction,” it added.
Meanwhile, in a biannual Transparency Report published last week, Twitter revealed it had suspended a total of 636,248 accounts between August 1, 2015 and December 31, 2016 for violations related to the promotion of terrorism — saying the majority of the accounts (74 percent) were identified by its own “internal, proprietary spam-fighting tools,” rather than via user reports.
Twitter’s report underlines the scale of the challenge posed by extremist content on social platforms, given the sheer volume of uploads involved. Those volumes are orders of magnitude greater on more popular platforms such as Facebook and YouTube, meaning far more content to sift through in order to locate and remove any extremist material.
In February, Facebook CEO Mark Zuckerberg also discussed the issue of terrorist content online, and specifically his hope that AI will play a larger role in the future to tackle this challenge, although he also cautioned that “it will take many years to fully develop these systems.”
“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide,” he wrote then.
In an earlier draft of the open letter, Zuckerberg suggested AI could even be used to identify terrorists plotting attacks via private channels — likely via analysis of account behavior patterns, according to a source, not by backdooring encryption (the company already uses machine learning for fighting spam and malware on the end-to-end encrypted WhatsApp, for example).
His edited comment on private channels suggests there are metadata-focused techniques governments could pursue to glean intelligence from within encrypted apps without demanding access to the content itself, though the political pressure to do that legwork may well fall on the social platforms themselves.
Rudd is clearly pushing internet companies to do more and do it quicker when it comes to removing extremist content. So Zuckerberg’s time frame of a potential AI fix “many years” ahead likely won’t wash. Political time frames tend to be much tighter.
She’s not the only politician stepping up the rhetoric. Social media giants are facing growing pressure in Germany, which earlier this month proposed a new law for social media platforms to deal with hate-speech complaints. The country previously secured agreements from the companies to remove illegal content within 24 hours of a complaint being made, but the government has accused Facebook and Twitter especially of not taking user complaints seriously enough — hence, it says, it’s going down a legislative route now.
A report in The Telegraph last week suggested the U.K. government is also considering a new law to prosecute internet companies if terrorist content is not immediately taken down when reported. Although ministers were apparently questioning how such a law could be enforced when companies are based overseas, as indeed most of the internet companies in question are.
Another possibility: the Home Office selectively leaked a threat of legislation ahead of today’s meeting, to encourage internet companies to come up with alternative fixes.
Yesterday, digital and human rights groups, including Privacy International, the Open Rights Group, Liberty and Human Rights Watch, called on the U.K. government to be “transparent” and “open” about the discussions it’s having with internet companies. “Private, informal agreements are not consistent with open, democratic governance,” they wrote.
“Government requests directed to tech companies to take down content is de facto state censorship. Some requests may be entirely legitimate but the sheer volumes make us highly concerned about their validity and the accountability of the processes.”
“We need assurances that only illegal material will be sought out by government officials and taken down by tech companies,” they added. “Transparency and judicial oversight are needed over government takedown requests.”
The group also called out Rudd for not publicly referencing existing powers at the government’s disposal, and expressed concern that any “technological limitations to encryption” the government seeks could have damaging implications for citizens’ “personal security.”
“We also note that Ms Rudd may seek to use Technical Capability Notices (TCNs) to enforce changes [to encryption]; and these would require secrecy. We are therefore surprised that public comments by Ms Rudd have not referenced her existing powers.

“We do not believe that the TCN process is robust enough in any case, nor that it should be applied to non-UK providers, and are concerned about the precedent that may be set by companies complying with a government over requests like these.”
The Home Office did not respond to a request for comment on the group’s open letter, nor respond to specific questions about its discussions today with internet companies, but a government source told us that the meeting is private.
Earlier this week Rudd faced ridicule on social media, and suggestions from tech industry figures that she does not fully understand the workings of the technologies she’s calling out, following comments made during a BBC interview on Sunday — in which she said people in the technology industry understand “the necessary hashtags to stop this stuff even being put up.”
The more likely explanation is that the undoubtedly well-briefed home secretary is playing politics in an attempt to gain an edge with a group of very powerful, overseas-based internet giants.
Update: Following today’s meeting, the home secretary has put out the following statement:
My starting point is pretty straightforward. I don’t think that people who want to do us harm should be able to use the internet or social media to do so. I want to make sure we are doing everything we can to stop this.
It was a useful discussion and I’m glad to see that progress has been made.
We focused on the issue of access to terrorist propaganda online and the very real and evolving threat it poses.
I said I wanted to see this tackled head-on and I welcome the commitment from the key players to set up a cross-industry forum that will help to do this.
In taking forward this work I’d like to see the industry to go further and faster in not only removing online terrorist content but stopping it going up in the first place. I’d also like to see more support for smaller and emerging platforms to do this as well, so they can no longer be seen as an alternative shop floor by those who want to do us harm.
A Facebook spokesman provided TechCrunch with the following joint letter from the four major Internet companies attending the meeting (emphasis mine):
Dear Home Secretary,
Thank you for the constructive discussion today on the challenges that terrorism poses to us all.
We welcome the opportunity to share with you details of the progress already made in this area and to hear how the UK Government is developing its approach in both the online and offline space. Our companies are committed to making our platforms a hostile space for those who seek to do harm and we have been working on this issue for several years. We share the Government’s commitment to ensuring terrorists do not have a voice online.
We believe that companies, academics, civil society, and government all have an interest and responsibility to respond to the danger of terrorist propaganda online—and as an industry we are committed to doing more. Companies increasingly share best practices with one another, and we have seen that sharing lessons learned across sectors can improve our collective response to this challenge. Each of our companies also commits to urgently improve that collaboration, with appropriate transparency and civil society involvement.
We will look at all options for structuring a forum to accelerate and strengthen this work, ranging from existing international, multilateral organizations, developing dedicated non-governmental organizations, to enhancing and broadening the current informal collaboration sessions that companies already conduct.
We recognize three initial goals for this collaboration:
First, to encourage the further development of technical tools to identify and remove terrorist propaganda. Companies apply unique content policies and have developed – and continue to develop – techniques appropriate for or unique to their own platforms. Nonetheless, there is a significant opportunity to share the knowledge gained in these varied efforts to develop innovative solutions.
Second, to support younger companies that can benefit from the expertise and experiences of more established ones. Working against terrorism is not a competitive issue within the industry and we pledge to engage the wider ecosystem of companies that face these challenges. The British Government can support this work by ensuring the 300 organisations that have a relationship with the Counter-Terrorism Internet Referral Unit are aware of the support available from industry peers and potentially convening those organisations where necessary.
Third, to support the efforts of civil society organisations to promote alternative and counter-narratives. Our companies all have already invested in existing programs to support civil society, but programs like the Civil Society Empowerment Programme highlight the potential benefits of greater collaboration. Again, the industry does not see this work as one where we compete, but rather as an opportunity to provide support whose value is greater than the individual contributions.
As you recognised, this work must be global in nature and must also avoid duplicating existing efforts. The innovative video hash sharing database that is currently operational in a small number of companies is one recent example of successful collaboration. That work has been strengthened by engagement with the European Union, and illustrates the effectiveness of voluntary, collaborative efforts. We anticipate that the next meeting of the EU Internet Forum will be an opportunity to update member states on the progress of both the hash sharing effort and the forum discussed today.
We are grateful for your support in ensuring that the Government and technology industry work together to tackle this vital issue.
Hugh Milward, Senior Director, Corporate, External and Legal Affairs, Microsoft UK
Nick Pickles, UK Head of Public Policy and Government, Twitter
Richard Allan, VP Public Policy EMEA, Facebook
Nicklas Lundblad, VP Public Policy Europe, Middle East, Russia and Africa, Google