Government & Policy

Surveillance powers in UK’s Online Safety Bill are risk to E2EE, warns legal expert

Comment

Image Credits: Harshil Shah / Flickr under a CC BY-ND 2.0 license.

Independent legal analysis of a controversial U.K. government proposal to regulate online speech under a safety-focused framework — aka the Online Safety Bill — says the draft bill contains some of the broadest mass surveillance powers over citizens ever proposed in a Western democracy, powers that, it warns, also pose a risk to the integrity of end-to-end encryption (E2EE).

The opinion, written by the barrister Matthew Ryder KC of Matrix Chambers, was commissioned by Index on Censorship, a group that campaigns for freedom of expression.

Ryder was asked to consider whether provisions in the bill are compatible with human rights law.

His conclusion is that, as drafted, the bill lacks essential safeguards on its surveillance powers, meaning that without further amendment it will likely breach the European Convention on Human Rights (ECHR).

Update: Index on Censorship has now published Ryder’s opinion, and its executive summary of concerns, in full — here.

The Online Safety Bill’s progress through parliament was paused over the summer — and again in October — following political turbulence in the governing Conservative Party. After the arrival of a new digital minister, and two changes of prime minister, the government has indicated it intends to make amendments to the draft — however these are focused on provisions related to so-called ‘legal but harmful’ speech, rather than the gaping human rights hole identified by Ryder.

UK to change Online Safety Bill limits on ‘legal but harmful’ content for adults

We reached out to the Home Office for a response to the issues raised by his legal opinion.

A government spokesperson replied with an emailed statement, attributed to minister for security Tom Tugendhat, which dismisses any concerns:

The Online Safety Bill has privacy at the heart of its proposals and ensures we’re able to protect ourselves from online crimes including child sexual exploitation. It’s not a ban on any type of technology or service design.

Where a company fails to tackle child sexual abuse on its platforms, it is right that Ofcom as the independent regulator has the power, as a last resort, to require these companies to take action.

Strong encryption protects our privacy and our online economy but end-to-end encryption can be implemented in a way which is consistent with public safety. The Bill ensures that tech companies do not provide a safe space for the most dangerous predators online.

Ryder’s analysis finds key legal checks are lacking in the bill, which grants the state sweeping powers to compel digital providers to surveil users’ online communications “on a generalised and widespread basis” — yet fails to include any form of independent prior authorisation (or independent ex post facto oversight) for the issuing of content scanning notices.

In Ryder’s assessment this lack of rigorous oversight would likely breach Articles 8 (right to privacy) and 10 (right to freedom of expression) of the ECHR.

Existing very broad surveillance powers granted to U.K. security services, under the (also highly controversial) Investigatory Powers Act 2016 (IPA), do contain legal checks and balances for authorizing the most intrusive powers — involving the judiciary in signing off intercept warrants.

But the Online Safety Bill leaves it up to the designated internet regulator to make decisions to issue the most intrusive content scanning orders — a public body that Ryder argues is not adequately independent for this function.

“The statutory scheme does not make provision for independent authorisation for 104 Notices even though it may require private bodies – at the behest of a public authority – to carry out mass state surveillance of millions of user’s communications. Nor is there any provision for ex post facto independent oversight,” he writes. “Ofcom, the state regulator, cannot in our opinion, be regarded as an independent body in this context.”

He also points out that given existing broad surveillance powers under the IPA, the “mass surveillance” of online comms proposed in the Online Safety Bill may not meet another key human rights test — of being “necessary in a democratic society.”

Bulk surveillance powers under the IPA must be linked to a national security concern and cannot be used solely for the prevention and detection of serious crime between U.K. users. The Online Safety Bill, which his legal analysis argues grants similar “mass surveillance” powers to Ofcom, covers a much broader range of content than pure national security issues. So it looks far less bounded.

Commenting on Ryder’s legal opinion in a statement, Index on Censorship’s chief executive, Ruth Smeeth, denounced the bill’s overreach — writing:

This legal opinion makes clear the myriad issues surrounding the Online Safety Bill. The vague drafting of this legislation will necessitate Ofcom, a media regulator, unilaterally deciding how to deploy massive powers of surveillance across almost every aspect of digital day-to-day life in Britain. Surveillance by regulator is perhaps the most egregious instance of overreach in a Bill that is simply unfit for purpose.

Impact on E2EE

While much of the controversy attached to the Online Safety Bill — which was published in draft last year but has continued being amended and expanded in scope by government — has focused on risks to freedom of expression, there are a range of other notable concerns, including how content scanning provisions in the legislation could impact E2EE, with critics like the Open Rights Group warning the law will essentially strong-arm service providers into breaking strong encryption.

Concerns have stepped up since a government amendment this July, which proposed new powers for Ofcom to force messaging platforms to implement content-scanning technologies even if comms are strongly encrypted on their service. The amendment stipulated that a regulated service could be required to use “best endeavours” to develop or source technology for detecting and removing CSEA in private comms. The inclusion of private comms is what puts it on a collision course with E2EE.

UK could force E2E encrypted platforms to do CSAM-scanning

E2EE remains the ‘gold standard’ for encryption and online security — and is found on mainstream messaging platforms like WhatsApp, iMessage and Signal, to name a few — providing essential security and privacy for users’ online comms.

So any laws that threaten use of this standard — or open up new vulnerabilities for E2EE — could have a massive impact on web users’ security globally.

In the legal opinion, Ryder focuses most of his attention on the Online Safety Bill’s content scanning provisions — which are creating this existential risk for E2EE.

The bulk of his legal analysis centers on Clause 104 of the bill — which grants the designated internet watchdog (existing media and comms regulator, Ofcom) a new power to issue notices to in-scope service providers requiring them to identify and take down terrorism content that’s communicated “publicly” by means of their services or Child Sex Exploitation and Abuse (CSEA) content being communicated “publicly or privately.” And, again, the inclusion of “private” comms is where things look really sticky for E2EE.

Ryder takes the view that the bill, rather than forcing messaging platforms to abandon E2EE altogether, will push them toward deploying a controversial technology called client-side scanning (CSS) — as a way to comply with 104 Notices issued by Ofcom — predicting that’s “likely to be the primary technology whose use is mandated.”

“Clause 104 does not refer to CSS (or any technology) by name. It mentions only ‘accredited technology.’ However, the practical implementation of 104 Notices requiring the identification, removal and/or blocking of content leads almost inevitably to the concern that this power will be used by Ofcom to mandate CSPs [communications service providers] using some form of CSS,” he writes, adding: “The Bill notes that the accredited technology referred to in c.104 is a form of ‘content moderation technology,’ meaning ‘technology, such as algorithms, keyword matching, image matching or image classification, which [ … ] analyses relevant content’ (c.187(2)(11)). This description corresponds with CSS.”

He also points to an article published by two senior GCHQ officials this summer — which he says “endorsed CSS as a potential solution to the problem of CSEA content being transmitted on encrypted platforms” — further noting that their comments were made “against the backdrop of the ongoing debate about the OLSB [Online Safety Bill].”

“Any attempt to require CSPs to undermine their implementation of end-to-end encryption generally would have far-reaching implications for the safety and security of all global online communications. We are unable to envisage circumstances where such a destructive step in the security of global online communications for billions of users could be justified,” he goes on to warn.

Client-side scanning risk

CSS refers to controversial scanning technology in which the content of encrypted communications is scanned with the goal of identifying objectionable content. The process entails a message being converted to a cryptographic digital fingerprint prior to it being encrypted and sent, with this fingerprint then compared with a database of fingerprints to check for any matches with known objectionable content (such as CSEA). The comparison of these cryptographic fingerprints can take place either on the user’s own device — or on a remote service.
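The fingerprint-matching process described above can be sketched in a few lines of Python. This is a minimal illustration of the matching logic, not any real deployment: production systems use perceptual hashes (such as PhotoDNA) so near-duplicate images also match, whereas plain SHA-256 is used here only to keep the sketch self-contained, and the fingerprint database here is invented.

```python
import hashlib

# Hypothetical database of fingerprints of known objectionable content.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"known-objectionable-sample").hexdigest(),
}

def fingerprint(content: bytes) -> str:
    """Derive a fingerprint of message content before encryption."""
    return hashlib.sha256(content).hexdigest()

def client_side_scan(content: bytes) -> bool:
    """Return True if the content matches a known fingerprint.

    In a CSS deployment this check runs on the sender's device (or a
    remote service), *before* the message is end-to-end encrypted.
    """
    return fingerprint(content) in KNOWN_FINGERPRINTS

# A benign message does not match; the flagged sample does.
assert client_side_scan(b"hello") is False
assert client_side_scan(b"known-objectionable-sample") is True
```

The key point critics make is visible even in this toy version: the scan must see the plaintext before encryption, which is precisely what defeats the ‘zero knowledge’ guarantee of E2EE.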

Wherever the comparison takes place, privacy and security experts argue that CSS breaks the E2E trust model since it fundamentally defeats the ‘zero knowledge’ purpose of end-to-end encryption and generates new risks by opening up novel attack and/or censorship vectors.

For example they point to the prospect of embedded content-scanning infrastructure enabling ‘censorship creep’ as a state could mandate comms providers scan for an increasingly broad range of ‘objectionable’ content (from copyrighted material all the way up to expressions of political dissent that are displeasing to an autocratic regime, since tools developed within a democratic system aren’t likely to be applied in only one place in the world).

An attempt by Apple to deploy CSS last year on iOS users’ devices — when it announced it would begin scanning iCloud Photo uploads for known child abuse imagery — led to a huge backlash from privacy and security experts. Apple first paused — and then quietly dropped reference to the plan in December, so it appears to have abandoned the idea. However governments could revive such moves by mandating deployment of CSS via laws like the U.K.’s Online Safety Bill which relies on the same claimed child safety justification to embed and enforce content scanning on platforms.

Notably, the U.K. Home Office has been actively supporting development of content-scanning technologies which could be applied to E2EE services — announcing a “Tech Safety Challenge Fund” last year to splash taxpayer cash on the development of what it billed at the time as “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption.”

Last November, five winning projects were announced as part of that challenge. It’s not clear how ‘developed’ — and/or accurate — these prototypes are. But the government is moving ahead with Online Safety legislation that this legal expert suggests will, de facto, require E2EE platforms to carry out content scanning and drive uptake of CSS — regardless of the state of development of such tech.

UK names five projects to get funding for CSAM detection

Discussing the government’s proposed amendment to Clause 104 — which envisages Ofcom being able to require comms service providers to ‘use best endeavours’ to develop or source their own content-scanning technology, achieving the same purposes as the accredited technology the bill also envisages the regulator signing off — Ryder predicts: “It seems likely that any such solution would be CSS or something akin to it. We think it is highly unlikely that CSPs would instead, for example, attempt to remove all end-to-end encryption on their services. Doing so would not remove the need for them to analyse the content of communications to identify relevant content. More importantly, however, this would fatally compromise security for their users and on their platforms, almost certainly causing many users to switch to other services.”

“[I]f 104 Notices were issued across all eligible platforms, this would mean that the content of almost all internet-based communications by millions of people — including the details of their personal conversations — would be constantly surveilled by service providers. Whether this happens will, of course, depend on how Ofcom exercises its power to issue 104 Notices but the inherent tension between the apparent aim, and the need for proportionate use is self-evident,” he adds.

Failure to comply with the Online Safety Bill will put service providers at risk of a range of severe penalties — so very large sticks are being assembled and put in place alongside sweeping surveillance powers to force compliance.

The draft legislation allows for fines of up to 10% of global annual turnover (or £18 million, whichever is higher). The bill would also enable Ofcom to apply to court for “business disruption measures”, including blocking non-compliant services within the U.K. market. Senior execs at providers who fail to cooperate with the regulator could also risk criminal prosecution.
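The penalty ceiling works as a simple maximum of two figures, which can be sketched as follows (a hypothetical helper for illustration, not text from the bill):

```python
def max_fine(global_annual_turnover_gbp: float) -> float:
    """Maximum fine under the draft bill: 10% of global annual
    turnover or GBP 18 million, whichever is higher."""
    return max(0.10 * global_annual_turnover_gbp, 18_000_000)

# For a large platform, the 10% figure dominates.
assert max_fine(1_000_000_000) == 100_000_000
# For a smaller provider, the GBP 18M floor applies.
assert max_fine(50_000_000) == 18_000_000
```

In other words, the £18 million figure acts as a floor, so even providers with modest revenues face a substantial maximum penalty.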

For its part, the U.K. government has — so far — been dismissive of concerns about the impact of the legislation on E2EE.

In a section on “private messaging platforms,” a government fact sheet claims content-scanning technology would only be mandated by Ofcom “as a last resort.” The same text also suggests these scanning technologies will be “highly accurate” — without providing any evidence in support of the assertion. And it writes that “use of this power will be subject to strict safeguards to protect users’ privacy,” adding: “Highly accurate automated tools will ensure that legal content is not affected. To use this power, Ofcom must be certain that no other measures would be similarly effective and there is evidence of a widespread problem on a service.”

The notion that novel AI will be “highly accurate” for a wide-ranging content-scanning purpose at scale is obviously questionable — and demands robust evidence to back it up.

You need only consider how blunt a tool AI has proven to be for content moderation on mainstream platforms, hence the thousands of human contractors still employed to review automated reports. So it seems highly fanciful that the Home Office has fostered, or will be able to foster, development of a far more effective AI filter than tech giants like Google and Facebook have managed to devise over the past decades.

As for limits on use of content-scanning notices, Ryder’s opinion touches on safeguards contained in Clause 105 of the bill — but he questions whether these are sufficient to address the full sweep of human rights concerns attached to such a potent power.

“Other safeguards exist in Clause 105 of the OLSB but whether those additional safeguards will be sufficient will depend on how they are applied in practice,” he suggests. “There is currently no indication as to how Ofcom will apply those safeguards and limit the scope of 104 Notices.

“For example, Clause 105(h) alludes to Article 10 of the ECHR, by requiring appropriate consideration to be given to interference with the right to freedom of expression. But there is no specific provision ensuring the adequate protection of journalistic sources, which will need to be provided in order to prevent a breach of Article 10.”

In further remarks responding to Ryder’s opinion, the Home Office emphasized that Section 104 Notice powers will only be used where there are no alternative, less intrusive measures capable of achieving the necessary reduction in illegal CSEA (and/or terrorism content) appearing on the service — adding that it will be up to the regulator to assess whether issuing a notice is necessary and proportionate, taking into account matters set out in the legislation including the risk of harm occurring on a service, as well as the prevalence of harm.

UK government denies fresh delay to Online Safety Bill will derail it
