An independent legal analysis of a controversial U.K. government proposal to regulate online speech under a safety-focused framework, aka the Online Safety Bill, finds the draft bill contains some of the broadest mass surveillance powers over citizens ever proposed in a Western democracy, and warns those powers pose a risk to the integrity of end-to-end encryption (E2EE).
The opinion, written by the barrister Matthew Ryder KC of Matrix Chambers, was commissioned by Index on Censorship, a group that campaigns for freedom of expression.
Ryder was asked to consider whether provisions in the bill are compatible with human rights law.
His conclusion is that, as drafted, the bill lacks essential safeguards on its surveillance powers, meaning that, without further amendment, it will likely breach the European Convention on Human Rights (ECHR).
Update: Index on Censorship has now published Ryder’s opinion, and its executive summary of concerns, in full — here.
The Online Safety Bill’s progress through parliament was paused over the summer — and again in October — following political turbulence in the governing Conservative Party. After the arrival of a new digital minister, and two changes of prime minister, the government has indicated it intends to make amendments to the draft — however these are focused on provisions related to so-called ‘legal but harmful’ speech, rather than the gaping human rights hole identified by Ryder.
We reached out to the Home Office for a response to the issues raised by his legal opinion.
A government spokesperson replied with an emailed statement, attributed to minister for security Tom Tugendhat, which dismisses any concerns:
The Online Safety Bill has privacy at the heart of its proposals and ensures we’re able to protect ourselves from online crimes including child sexual exploitation. It’s not a ban on any type of technology or service design.
Where a company fails to tackle child sexual abuse on its platforms, it is right that Ofcom as the independent regulator has the power, as a last resort, to require these companies to take action.
Strong encryption protects our privacy and our online economy but end-to-end encryption can be implemented in a way which is consistent with public safety. The Bill ensures that tech companies do not provide a safe space for the most dangerous predators online.
Ryder’s analysis finds key legal checks are lacking in the bill, which grants the state sweeping powers to compel digital providers to surveil users’ online communications “on a generalised and widespread basis” yet fails to include any form of independent prior authorisation (or independent ex post facto oversight) for the issuing of content scanning notices.
In Ryder’s assessment this lack of rigorous oversight would likely breach Articles 8 (right to privacy) and 10 (right to freedom of expression) of the ECHR.
Existing very broad surveillance powers granted to U.K. security services, under the (also highly controversial) Investigatory Powers Act 2016 (IPA), do contain legal checks and balances for authorizing the most intrusive powers — involving the judiciary in signing off intercept warrants.
But the Online Safety Bill leaves it up to the designated internet regulator to make decisions to issue the most intrusive content scanning orders — a public body that Ryder argues is not adequately independent for this function.
“The statutory scheme does not make provision for independent authorisation for 104 Notices even though it may require private bodies – at the behest of a public authority – to carry out mass state surveillance of millions of user’s communications. Nor is there any provision for ex post facto independent oversight,” he writes. “Ofcom, the state regulator, cannot in our opinion, be regarded as an independent body in this context.”
He also points out that given existing broad surveillance powers under the IPA, the “mass surveillance” of online comms proposed in the Online Safety Bill may not meet another key human rights test — of being “necessary in a democratic society.”
Bulk surveillance powers under the IPA must be linked to a national security concern and cannot be used solely for the prevention and detection of serious crime between U.K. users. The Online Safety Bill, by contrast, which his legal analysis argues grants similar “mass surveillance” powers to Ofcom, covers a much broader range of content than pure national security issues. So it looks far less bounded.
Commenting on Ryder’s legal opinion in a statement, Index on Censorship’s chief executive, Ruth Smeeth, denounced the bill’s overreach — writing:
This legal opinion makes clear the myriad issues surrounding the Online Safety Bill. The vague drafting of this legislation will necessitate Ofcom, a media regulator, unilaterally deciding how to deploy massive powers of surveillance across almost every aspect of digital day-to-day life in Britain. Surveillance by regulator is perhaps the most egregious instance of overreach in a Bill that is simply unfit for purpose.
Impact on E2EE
While much of the controversy attached to the Online Safety Bill — which was published in draft last year but has continued being amended and expanded in scope by government — has focused on risks to freedom of expression, there are a range of other notable concerns. Including how content scanning provisions in the legislation could impact E2EE, with critics like the Open Rights Group warning the law will essentially strong-arm service providers into breaking strong encryption.
Concerns have stepped up since a government amendment this July proposed new powers for Ofcom to force messaging platforms to implement content-scanning technologies even if comms are strongly encrypted on their service. The amendment stipulated that a regulated service could be required to use “best endeavours” to develop or source technology for detecting and removing CSEA in private comms, and it is that inclusion of private comms which puts the provision on a collision course with E2EE.
E2EE remains the ‘gold standard’ for encryption and online security — and is found on mainstream messaging platforms like WhatsApp, iMessage and Signal, to name a few — providing essential security and privacy for users’ online comms.
So any laws that threaten use of this standard — or open up new vulnerabilities for E2EE — could have a massive impact on web users’ security globally.
In the legal opinion, Ryder focuses most of his attention on the Online Safety Bill’s content scanning provisions — which are creating this existential risk for E2EE.
The bulk of his legal analysis centers on Clause 104 of the bill, which grants the designated internet watchdog (the existing media and comms regulator, Ofcom) a new power to issue notices to in-scope service providers requiring them to identify and take down terrorism content that’s communicated “publicly” by means of their services, or Child Sexual Exploitation and Abuse (CSEA) content communicated “publicly or privately.” And, again, the inclusion of “private” comms is where things look really sticky for E2EE.
Ryder takes the view that the bill, rather than forcing messaging platforms to abandon E2EE altogether, will push them toward deploying a controversial technology called client-side scanning (CSS) — as a way to comply with 104 Notices issued by Ofcom — predicting that’s “likely to be the primary technology whose use is mandated.”
“Clause 104 does not refer to CSS (or any technology) by name. It mentions only ‘accredited technology.’ However, the practical implementation of 104 Notices requiring the identification, removal and/or blocking of content leads almost inevitably to the concern that this power will be used by Ofcom to mandate CSPs [communications service providers] using some form of CSS,” he writes, adding: “The Bill notes that the accredited technology referred to c.104 is a form of ‘content moderation technology,’ meaning ‘technology, such as algorithms, keyword matching, image matching or image classification, which [ … ] analyses relevant content’ (c.187(2)(11). This description corresponds with CSS.”
He also points to an article published by two senior GCHQ officials this summer — which he says “endorsed CSS as a potential solution to the problem of CSEA content being transmitted on encrypted platforms” — further noting that their comments were made “against the backdrop of the ongoing debate about the OLSB [Online Safety Bill].”
“Any attempt to require CSPs to undermine their implementation of end-to-end encryption generally, would have far-reaching implications for the safety and security of all global on-line communications. We are unable to envisage circumstances where such a destructive step in the security of global online communications for billions of users could be justified,” he goes on to warn.
Client-side scanning risk
CSS refers to controversial scanning technology in which the content of encrypted communications is scanned with the goal of identifying objectionable content. The process entails a message being converted to a cryptographic digital fingerprint prior to it being encrypted and sent, with this fingerprint then compared with a database of fingerprints to check for any matches with known objectionable content (such as CSEA). The comparison of these cryptographic fingerprints can take place either on the user’s own device — or on a remote service.
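The matching flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a description of any real deployment: all names here are invented for the example, and an exact SHA-256 match stands in for the perceptual hashing real systems use (which is designed so near-duplicates also match).

```python
import hashlib

# Hypothetical database of fingerprints of known objectionable content.
# Real systems use perceptual hashes so altered copies still match;
# an exact cryptographic hash is used here purely for illustration.
KNOWN_FINGERPRINTS = {
    hashlib.sha256(b"known objectionable payload").hexdigest(),
}


def fingerprint(content: bytes) -> str:
    """Derive a fingerprint of the plaintext BEFORE it is encrypted."""
    return hashlib.sha256(content).hexdigest()


def client_side_scan(content: bytes) -> bool:
    """Return True if the content matches the fingerprint database."""
    return fingerprint(content) in KNOWN_FINGERPRINTS


def encrypt(content: bytes) -> bytes:
    # Placeholder standing in for real E2EE encryption of the payload.
    return content[::-1]


def send_message(content: bytes) -> str:
    # The scan runs on the plaintext before encryption: the transport
    # stays end-to-end encrypted, but the content is no longer unseen.
    if client_side_scan(content):
        return "flagged"  # reported/blocked before sending
    encrypt(content)  # encrypt and transmit as normal
    return "sent"
```

The key point the sketch makes concrete is that the comparison happens against the plaintext, before encryption, which is why critics argue the approach sits outside the E2EE guarantee even when the matching itself runs on the user’s device.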
Wherever the comparison takes place, privacy and security experts argue that CSS breaks the E2E trust model since it fundamentally defeats the ‘zero knowledge’ purpose of end-to-end encryption and generates new risks by opening up novel attack and/or censorship vectors.
For example, they point to the prospect of embedded content-scanning infrastructure enabling ‘censorship creep’: a state could mandate that comms providers scan for an increasingly broad range of ‘objectionable’ content, from copyrighted material all the way up to expressions of political dissent that displease an autocratic regime, since tools developed within a democratic system are unlikely to be applied in only one place in the world.
An attempt by Apple to deploy CSS last year on iOS users’ devices — when it announced it would begin scanning iCloud Photo uploads for known child abuse imagery — led to a huge backlash from privacy and security experts. Apple first paused the rollout, then quietly dropped reference to the plan in December, so it appears to have abandoned the idea. However, governments could revive such moves by mandating deployment of CSS via laws like the U.K.’s Online Safety Bill, which relies on the same claimed child safety justification to embed and enforce content scanning on platforms.
Notably, the U.K. Home Office has been actively supporting development of content-scanning technologies which could be applied to E2EE services — announcing a “Tech Safety Challenge Fund” last year to splash taxpayer cash on the development of what it billed at the time as “innovative technology to keep children safe in environments such as online messaging platforms with end-to-end encryption.”
Last November, five winning projects were announced as part of that challenge. It’s not clear how ‘developed’ — and/or accurate — these prototypes are. But the government is moving ahead with Online Safety legislation that this legal expert suggests will, de facto, require E2EE platforms to carry out content scanning and drive uptake of CSS — regardless of the state of development of such tech.
Discussing the government’s proposed amendment to Clause 104 — which envisages Ofcom being able to require comms service providers to ‘use best endeavours’ to develop or source their own content-scanning technology to achieve the same purposes as the accredited technology the bill also envisages the regulator signing off on — Ryder predicts: “It seems likely that any such solution would be CSS or something akin to it. We think it is highly unlikely that CSPs would instead, for example, attempt to remove all end-to-end encryption on their services. Doing so would not remove the need for them to analyse the content of communications to identify relevant content. More importantly, however, this would fatally compromise security for their users and on their platforms, almost certainly causing many users to switch to other services.”
“[I]f 104 Notices were issued across all eligible platforms, this would mean that the content of almost all internet-based communications by millions of people — including the details of their personal conversations — would be constantly surveilled by service providers. Whether this happens will, of course, depend on how Ofcom exercises its power to issue 104 Notices but the inherent tension between the apparent aim, and the need for proportionate use is self-evident,” he adds.
Failure to comply with the Online Safety Bill will put service providers at risk of a range of severe penalties — so very large sticks are being assembled and put in place alongside sweeping surveillance powers to force compliance.
The draft legislation allows for fines of up to 10% of global annual turnover (or £18 million, whichever is higher). The bill would also enable Ofcom to apply to court for “business disruption measures”, including blocking non-compliant services within the U.K. market, while senior execs at providers who fail to cooperate with the regulator could risk criminal prosecution.
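The penalty ceiling works as a simple greater-of formula, which a one-liner makes explicit (the function name is just for this illustration):

```python
def max_fine_gbp(global_annual_turnover_gbp: float) -> float:
    """Maximum penalty under the draft bill: the greater of 10% of
    global annual turnover or the £18 million floor."""
    return max(0.10 * global_annual_turnover_gbp, 18_000_000.0)
```

So a provider turning over £1 billion faces a ceiling of £100 million, while a small provider with £50 million in turnover still faces the full £18 million floor.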
For its part, the U.K. government has — so far — been dismissive of concerns about the impact of the legislation on E2EE.
In a section on “private messaging platforms,” a government fact sheet claims content-scanning technology would only be mandated by Ofcom “as a last resort.” The same text also suggests these scanning technologies will be “highly accurate” — without providing any evidence in support of the assertion. And it writes that “use of this power will be subject to strict safeguards to protect users’ privacy,” adding: “Highly accurate automated tools will ensure that legal content is not affected. To use this power, Ofcom must be certain that no other measures would be similarly effective and there is evidence of a widespread problem on a service.”
The notion that novel AI will be “highly accurate” for a wide-ranging content-scanning purpose at scale is obviously questionable — and demands robust evidence to back it up.
You only need to consider how blunt a tool AI has proven to be for content moderation on mainstream platforms, hence the thousands of human contractors still employed to review automated reports. So it seems highly fanciful that the Home Office has fostered, or will be able to foster, development of a far more effective AI filter than tech giants like Google and Facebook have managed to devise over the past decades.
As for limits on use of content-scanning notices, Ryder’s opinion touches on safeguards contained in Clause 105 of the bill — but he questions whether these are sufficient to address the full sweep of human rights concerns attached to such a potent power.
“Other safeguards exist in Clause 105 of the OLSB but whether those additional safeguards will be sufficient will depend on how they are applied in practice,” he suggests. “There is currently no indication as to how Ofcom will apply those safeguards and limit the scope of 104 Notices.
“For example, Clause 105(h) alludes to Article 10 of the ECHR, by requiring appropriate consideration to be given to interference with the right to freedom of expression. But there is no specific provision ensuring the adequate protection of journalistic sources, which will need to be provided in order to prevent a breach of Article 10.”
In further remarks responding to Ryder’s opinion, the Home Office emphasized that Section 104 Notice powers will only be used where there are no alternative, less intrusive measures capable of achieving the necessary reduction in illegal CSEA (and/or terrorism) content appearing on the service, adding that it will be up to the regulator to assess whether issuing a notice is necessary and proportionate, taking into account matters set out in the legislation, including the risk of harm occurring on a service as well as the prevalence of harm.