Europe’s CSAM scanning plan looks unlawful, per leaked legal advice

A legal opinion on a controversial European Union legislative plan, set out last May when the Commission proposed countering child sexual abuse online by imposing obligations on platforms to scan for abuse and grooming, suggests the planned approach is incompatible with existing EU laws that prohibit general and indiscriminate monitoring of people’s communications.

The advice by the Council’s legal service on the proposed Child Sexual Abuse Regulation (also sometimes referred to as “Chat control”), which leaked online this week and was covered by The Guardian yesterday, finds the regulation as drafted to be on a collision course with fundamental European rights such as privacy and data protection; freedom of expression; and the right to respect for private and family life, as critics have warned from the get-go.

The Commission has countered these objections by claiming the plan is lawful since it will only apply what it couches as “targeted” and “proportionate” measures to platforms where there is a risk of online child sexual abuse taking place, along with “robust conditions and safeguards”.

The legal opinion essentially blasts that defence to smithereens. It suggests, on the contrary, it’s “highly probable” that a judicial review of the regulation’s detection orders — which require platforms to scan for child sexual abuse material (CSAM) and other related activity (like grooming) — will conclude the screening obligations constitute “general and indiscriminate” monitoring, rather than being targeted (and proportionate), as EU law demands.

On this, the legal advice to the Council points out that the Commission’s claimed “targeting” of orders at risky platforms is not a meaningful limit, since it does not entail any targeting of specific users of a given platform and would therefore require “general screening” of all of a service’s users.

The opinion also warns that the net effect of such an approach risks a situation where all comms service providers are made subject to detection orders and forced to scan all their users’ comms, resulting in a total surveillance dragnet applied by national authorities in different Member States and essentially “covering all interpersonal communication services active in the Union”.

Or, in other words, the Commission proposal is a charter for mass comms surveillance wrapped in a banner daubed with: ‘But think of the children!’

Here’s more from the document — emphasis ours:

[I]t must be taken into consideration that interpersonal communication services are used by almost the entire population and may also be used for the dissemination of CSAM and/or for solicitation of children. Detection orders addressed to those services would entail a variable but in almost all cases very broad scope of automated analysis of personal data and access to personal and confidential information concerning a very large number of persons that are not involved, even indirectly, in child sexual abuse offences, the document observes.

This concern is further confirmed by the fact that the proposed Regulation does not provide any substantive safeguards to avoid the risk that the accumulated effect of application of the detection orders by national authorities in different Member States could lead to covering all interpersonal communication services active in the Union.

Furthermore, since issuing a detection order with regard to a specific provider of interpersonal communication services would entail the risk of encouraging the use of other services for child sexual abuse purposes, there is a clear risk that, in order to be effective, detection orders would have to be extended to other providers and lead de facto to a permanent surveillance of all interpersonal communications.

The lawyers penning the advice suggest, citing relevant case law, that such a broad and unbounded screening obligation would thereby entail “a particularly serious interference with fundamental rights”.

They point to successful legal challenges by digital rights group La Quadrature du Net and others, which litigated against governments’ generalized screening and retention of metadata, and note that the level of interference with fundamental rights proposed under the CSAM scanning plan is even greater, given it deals with the screening of communications content, whereas processing metadata is clearly “less intrusive than similar processing of content data”.

Their view is that the proposed approach would therefore breach EU data protection law’s proportionality principle, and the document goes on to observe: “[I]f the screening of communications metadata was judged by the Court proportionate only for the purpose of safeguarding national security, it is rather unlikely that similar screening of content of communications for the purpose of combating crime of child sexual abuse would be found proportionate, let alone with regard to the conduct not constituting criminal offences.”

The advice also flags a key concern raised by long-time critics of the proposal, vis-a-vis the risk mandatory CSAM scanning poses to the use of end-to-end encryption, suggesting detection orders would result in a de facto prohibition on platforms’ use of strong encryption — with associated (further) “strong” interference with fundamental rights like privacy, and with other “legitimate objectives” like data security.

Here’s more on that concern [again with our added emphasis]:

… the screening of content of communications would need to be effective also in an encrypted environment, which is currently widely implemented in the interpersonal communication environment. That would imply that the providers would have to consider (i) abandoning effective end-to-end encryption or (ii) introducing some form of “back-door” to access encrypted content or (iii) accessing the content on the device of the user before it is encrypted (so-called “client-side scanning”).

Therefore, it appears that the generalised screening of content of communications to detect any kind of CSAM would require de facto prohibiting, weakening or otherwise circumventing cybersecurity measures (in particular end-to-end encryption), to make such screening possible. The corresponding impact on cybersecurity measures, in so far as they are provided by economic operators on the market, even under the control of competent authorities, would create a stronger interference with the fundamental rights concerned and could cause an additional interference with other fundamental rights and legitimate objectives such as safeguarding data security.
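
For context on option (iii), “client-side scanning” generally means checking content on the user’s device, against a list of known material, before the message is end-to-end encrypted. The minimal Python sketch below is purely illustrative (the blocklist, the function names and the use of a plain cryptographic hash are our own assumptions; real systems use perceptual matching), but it shows where such a check has to sit relative to the encryption:

```python
import hashlib

# Illustrative sketch only: the blocklist, function names and use of a plain
# SHA-256 digest are hypothetical. Deployed matching systems (PhotoDNA-style)
# use perceptual hashes that survive resizing and re-encoding.
KNOWN_HASHES = {
    # placeholder digest standing in for a known, already-flagged image
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def matches_blocklist(attachment: bytes) -> bool:
    """Hash the attachment on the user's device and check it against the
    blocklist. In a client-side scanning design this runs before the message
    is end-to-end encrypted, which is why the opinion treats it as
    circumventing the encryption rather than coexisting with it."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_HASHES


def send(attachment: bytes) -> None:
    """Toy send path: flag on a match, otherwise encrypt and transmit."""
    if matches_blocklist(attachment):
        print("flagged: a real system would generate a report at this point")
    else:
        print("clean: encrypt and send as normal")


if __name__ == "__main__":
    send(b"holiday photo bytes")
```

The structural point critics make is visible here: whatever matching technology is plugged in, the content must be inspected in the clear on the device before encryption is applied.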

Another controversial aspect of the Commission proposal requires platforms to scan online comms to try to identify when adults are grooming children. On this, the legal advice assesses that the requirement on platforms to screen audio and written content to try to detect grooming would create additional major interferences with users’ rights and freedoms, and is likely to force platforms to apply age assessment/verification tech to all users.

“In fact, without establishing the precise age of all users, it would not be possible to know that the alleged solicitation is directed towards a child,” the advice suggests. “Such process would have to be done either by (i) mass profiling of the users or by (ii) biometric analysis of the user’s face and/or voice or by (iii) digital identification/certification system. Implementation of any of these measures by the providers of communication services would necessarily add another layer of interference with the rights and freedoms of the users.”
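
To make option (iii) of that list concrete, here is a minimal and purely hypothetical sketch of an age-attestation flow: a trusted issuer signs a claim about whether a user is an adult, and the communications provider verifies that claim rather than profiling the user itself. Every name here is invented for illustration, and a real digital identification or certification scheme would rely on public-key credentials rather than the shared-secret HMAC used to keep the example self-contained:

```python
import hashlib
import hmac
import json
from typing import Optional

# Hypothetical illustration only: a trusted issuer (for example a national eID
# scheme) attests whether a user is an adult, and the messaging provider checks
# that attestation instead of profiling users itself. Real digital identity
# systems use public-key signatures and standardised credential formats; the
# shared-secret HMAC below is only to keep the sketch dependency-free.
ISSUER_SECRET = b"demo-secret-known-only-to-the-issuer"


def issue_age_attestation(user_id: str, is_adult: bool) -> dict:
    """Issuer side: sign a minimal claim about the user's age bracket."""
    claim = json.dumps({"user": user_id, "adult": is_adult}, sort_keys=True)
    tag = hmac.new(ISSUER_SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def verify_age_attestation(attestation: dict) -> Optional[bool]:
    """Provider side: return the attested adult status, or None if the
    attestation fails verification."""
    expected = hmac.new(
        ISSUER_SECRET, attestation["claim"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return None
    return json.loads(attestation["claim"])["adult"]


if __name__ == "__main__":
    attestation = issue_age_attestation("alice", is_adult=True)
    print(verify_age_attestation(attestation))  # True
```

Even in this comparatively privacy-friendly variant, the opinion’s point stands: every user has to prove an age bracket before simply exchanging messages, which is the additional layer of interference the lawyers describe.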

The document evaluates such measures as constituting “very far-reaching” and “serious” interferences it says are “likely to cause the persons concerned to feel that their private lives are the subject of constant surveillance”; further warning that the cumulative impact of detection orders being imposed could entail such generalised access to, and further processing of, people’s comms that “the right to confidentiality of correspondence would become ineffective and devoid of content”. (Or more pithily: RIP privacy.)

The legal opinion is also dismissive of a proviso in the draft regulation which stipulates that any technologies used by service providers “shall not be able to extract any other information from the relevant communications than the information strictly necessary to detect [CSAM]”, and “shall be in accordance with the state of the art in the industry and the least intrusive in terms of the impact on the users’ rights to privacy and family life as well as data protection”. It warns that “not extracting irrelevant communication does not exclude, per se, the need to screen, through an automated analysis, all the interpersonal communication data of every user of the specific communication service to which the order is addressed, including to persons with respect to whom there would be no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with child sexual abuse offences”.

So, again, the evaluation is that the claimed safeguards don’t look very safe atop such intrusive surveillance.

The authors of the advice also highlight the difficulty of assessing the exact impact of the proposal on EU fundamental rights since much has been left up to platforms — including the choice of screening technology they would apply in response to receiving a detection order.

This too is a problematic aspect of the approach, they argue, calling for the legislation to be made more “clear, precise and complete”.

“[T]he requirement of compliance with fundamental rights is not defined in the act itself but is left to a very large extent to the service provider, which remains responsible for the choice of the technology and the consequences linked to its operation,” they write, adding: “[T]he regime of detection orders, as currently provided for by the proposed Regulation, entails the risk of not being sufficiently clear, precise and complete, and therefore of not being in compliance with the requirement that limitations to fundamental rights must be provided for by law.

“The proposed Regulation should provide more detailed elements both on the limits to fundamental rights that the specific type and features of the technology to be used would entail and related possible safeguard measures.”

The Commission was contacted for a response to the legal opinion. A spokesperson declined to comment on leaks. However, the Commission’s spokesperson for home affairs, Anitta Hipper, offered some general remarks on the proposal, which is now under negotiation by EU co-legislators, claiming:

The proposed legislation does not discourage or prevent in any way the use of end-to-end encryption. The proposed Regulation leaves to the provider concerned the choice of the technologies to be operated to comply effectively with detection orders, provided that the technologies meet the requirements of the Regulation. This includes the use of end-to-end encryption technology, which is an important tool to guarantee the security and confidentiality of the communications of users, including those of children.

As per the bloc’s usual lawmaking process, the proposal has been handed to co-legislators in the Parliament and Council, where the draft legislation remains under discussion as the other EU institutions work out their negotiating positions ahead of talks to push for agreement on a final text. It remains to be seen whether the controversial comms surveillance proposal will be adopted in its current (flawed, as legal experts tell it) form, or whether lawmakers will heed such trenchant critiques and make changes to bring it in line with EU law.

If the proposal isn’t substantially amended, it’s a safe bet it will face legal challenges — and, ultimately, looks likely to be unpicked by the EU’s top court (albeit that would be several years down the line).

Platforms themselves may also find ways to object — as they have been warning they will if the U.K. presses ahead with its own encryption-threatening online safety legislation.

Pirate Party MEP Patrick Breyer, shadow rapporteur for his political group in the European Parliament’s Civil Liberties Committee (LIBE) — and a long-time opponent of mass surveillance of private communications — seized on the legal opinion to press the case for lawmakers to rethink.

“The EU Council’s services now confirm in crystal clear words what other legal experts, human rights defenders, law enforcement officials, abuse victims and child protection organisations have been warning about for a long time: obliging e-mail, messaging and chat providers to search all private messages for allegedly illegal material and report to the police destroys and violates the right to confidentiality of correspondence,” he said in a statement.

“A flood of mostly false reports would make criminal investigations more difficult, criminalise children en masse and fail to bring the abusers and producers of such material to justice. According to this expertise, searching private communications for potential child sexual exploitation material, known or unknown, is legally feasible only if the search provisions are targeted and limited to persons presumably involved in such criminal activity.

“I call on EU governments to take a U-turn and stop the dystopian China-style chat control plans which they now know violate the fundamental rights of millions of citizens! No one is helping children with a regulation that will inevitably fail before the European Court of Justice. The Swedish government, currently holding the EU Council Presidency, must now immediately remove blanket chat control as well as generalised age verification from the proposed legislation. Governments of Europe, respect our fundamental right to confidential and anonymous correspondence now!”

“I have hopes that the wind may be changing regarding chat control,” Breyer added. “What children really need and want is a safe and empowering design of chat services as well as Europe-wide standards for effective prevention measures, victim support, counselling and criminal investigations.”

For more on the Commission’s CSAM scanning proposal, check out our report from last year.

In additional general remarks in support of the proposal, the EU’s home affairs spokeswoman also told us:

The focus should be on finding the most effective solutions rapidly. We cannot afford wasting a second. And numbers are telling: 87 million pictures and videos of child sexual abuse were detected online worldwide last year, up from 85 million the year before.

Detection and reporting of child sexual abuse by Internet companies has already been taking place and has been key to starting investigations for more than a decade. On August 3, 2024, the EU interim regulation that allows service providers to continue voluntary detection and reporting of online child sexual abuse and removal of child sexual abuse material will expire. If this happens, and the current proposal is not adopted, it will be forbidden for tech companies to detect this criminal content in online messages, from which a vast majority of the reports originates today. This will make it easier for predators to share child sexual abuse material and groom children in the EU and to get away with it unpunished. The interim regulation was a temporary fix, but to fight these crimes we need a permanent solution, built on the Digital Services Act, fully aligned with GDPR.

Commissioner Johansson was in Spain last week, meeting with the Spanish authorities ahead of the incoming Council Presidency. The fight against child sexual abuse remains high on the Council’s agenda. The Commission will continue to work closely with co-legislators on this proposal.

This report was updated with comment from the Commission.