Europe’s CSAM scanning plan looks unlawful, per leaked legal advice

A legal opinion on a controversial European Union legislative plan set out last May, when the Commission proposed countering child sexual abuse online by applying obligations on platforms to scan for abuse and grooming, suggests the planned approach is incompatible with existing EU laws that prohibit general and indiscriminate monitoring of people’s communications.

The advice by the Council’s legal service on the proposed Child Sexual Abuse Regulation (also sometimes referred to as “Chat control”), which leaked online this week — and was covered by The Guardian yesterday — finds the regulation as drafted to be on a collision course with fundamental European rights like privacy and data protection; freedom of expression; and the right to respect for a private family life, as critics have warned from the get-go.

The Commission countered these objections by claiming the plan is lawful since it will only apply what they couch as “targeted” and “proportionate” measures to platforms where there is a risk of online child sexual abuse taking place, along with “robust conditions and safeguards”.

The legal opinion essentially blasts that defence to smithereens. It suggests, on the contrary, it’s “highly probable” that a judicial review of the regulation’s detection orders — which require platforms to scan for child sexual abuse material (CSAM) and other related activity (like grooming) — will conclude the screening obligations constitute “general and indiscriminate” monitoring, rather than being targeted (and proportionate), as EU law demands.

On this, the legal advice to the Council points out that the Commission’s claimed “targeting” of orders at risky platforms is not a meaningful limit since it does not entail any targeting of specific users of a given platform, thereby requiring “general screening” of all service users.

The opinion also warns that the net effect of such an approach risks leading to a situation where all comms service providers are made subject to detection orders and forced to scan all their users’ comms — leading to a total surveillance dragnet being applied by national authorities in different Member States essentially “covering all interpersonal communication services active in the Union”.

Or, in other words, the Commission proposal is a charter for mass comms surveillance wrapped in a banner daubed with: ‘But think of the children!’

Here’s more from the document — emphasis ours:

“[I]t must be taken into consideration that interpersonal communication services are used by almost the entire population and may also be used for the dissemination of CSAM and/or for solicitation of children. Detection orders addressed to those services would entail a variable but in almost all cases very broad scope of automated analysis of personal data and access to personal and confidential information concerning a very large number of persons that are not involved, even indirectly, in child sexual abuse offences,” the document observes.

This concern is further confirmed by the fact that the proposed Regulation does not provide any substantive safeguards to avoid the risk that the accumulated effect of application of the detection orders by national authorities in different Member States could lead to covering all interpersonal communication services active in the Union.

Furthermore, since issuing a detection order with regard to a specific provider of interpersonal communication services would entail the risk of encouraging the use of other services for child sexual abuse purposes, there is a clear risk that, in order to be effective, detection orders would have to be extended to other providers and lead de facto to a permanent surveillance of all interpersonal communications.”

The lawyers penning the advice suggest, citing relevant case law, that such a broad and unbounded screening obligation would thereby entail “a particularly serious interference with fundamental rights”.

They point to successful legal challenges by digital rights group La Quadrature du Net and others — litigating against governments’ generalized screening and retention of metadata — while pointing out that the level of interference with fundamental rights proposed under the CSAM scanning plan is even greater, given it deals with the screening of communications content, whereas processing metadata is clearly “less intrusive than similar processing of content data”.

Their view is the proposed approach would therefore breach EU data protection law’s proportionality principle and the document goes on to observe: “[I]f the screening of communications metadata was judged by the Court proportionate only for the purpose of safeguarding national security, it is rather unlikely that similar screening of content of communications for the purpose of combating crime of child sexual abuse would be found proportionate, let alone with regard to the conduct not constituting criminal offences.”

The advice also flags a key concern raised by longtime critics of the proposal, vis-à-vis the risk mandatory CSAM scanning poses to the use of end-to-end encryption, suggesting detection orders would result in a de facto prohibition on platforms’ use of strong encryption — with associated (further) “strong” interference to fundamental rights like privacy, and to other “legitimate objectives” like data security.

Here’s more on that concern [again with our added emphasis]:

… the screening of content of communications would need to be effective also in an encrypted environment, which is currently widely implemented in the interpersonal communication environment. That would imply that the providers would have to consider (i) abandoning effective end-to-end encryption or (ii) introducing some form of “back-door” to access encrypted content or (iii) accessing the content on the device of the user before it is encrypted (so-called “client-side scanning”).

Therefore, it appears that the generalised screening of content of communications to detect any kind of CSAM would require de facto prohibiting, weakening or otherwise circumventing cybersecurity measures (in particular end-to-end encryption), to make such screening possible. The corresponding impact on cybersecurity measures, in so far as they are provided by economic operators on the market, even under the control of competent authorities, would create a stronger interference with the fundamental rights concerned and could cause an additional interference with other fundamental rights and legitimate objectives such as safeguarding data security.

Another controversial aspect of the Commission proposal requires platforms to scan online comms to try to identify when adults are grooming children. On this, the legal advice assesses that the requirement on platforms to screen audio and written content to try to detect grooming would create additional major interferences with users’ rights and freedoms that are likely to force platforms to apply age assessment/verification tech to all users.

“In fact, without establishing the precise age of all users, it would not be possible to know that the alleged solicitation is directed towards a child,” the advice suggests. “Such process would have to be done either by (i) mass profiling of the users or by (ii) biometric analysis of the user’s face and/or voice or by (iii) digital identification/certification system. Implementation of any of these measures by the providers of communication services would necessarily add another layer of interference with the rights and freedoms of the users.”

The document evaluates such measures as constituting “very far-reaching” and “serious” interferences it says are “likely to cause the persons concerned to feel that their private lives are the subject of constant surveillance”; further warning that the cumulative impact of detection orders being imposed could entail such generalised access to, and further processing of, people’s comms that “the right to confidentiality of correspondence would become ineffective and devoid of content”. (Or more pithily: RIP privacy.)

The legal opinion is also dismissive of a proviso in the draft regulation which stipulates that any technologies used by service providers “shall not be able to extract any other information from the relevant communications than the information strictly necessary to detect [CSAM]”, and “shall be in accordance with the state of the art in the industry and the least intrusive in terms of the impact on the users’ rights to privacy and family life as well as data protection” — warning that “not extracting irrelevant communication does not exclude, per se, the need to screen, through an automated analysis, all the interpersonal communication data of every user of the specific communication service to which the order is addressed, including to persons with respect to whom there would be no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with child sexual abuse offences”.

So, again, the evaluation is that the claimed safeguards don’t look very safe atop such intrusive surveillance.

The authors of the advice also highlight the difficulty of assessing the exact impact of the proposal on EU fundamental rights since much has been left up to platforms — including the choice of screening technology they would apply in response to receiving a detection order.

This too is a problematic aspect of the approach, they argue, calling for the legislation to be made more “clear, precise and complete”.

“[T]he requirement of compliance with fundamental rights is not defined in the act itself but is left to a very large extent to the service provider, which remains responsible for the choice of the technology and the consequences linked to its operation,” they write, adding: “[T]he regime of detection orders, as currently provided for by the proposed Regulation, entails the risk of not being sufficiently clear, precise and complete, and therefore of not being in compliance with the requirement that limitations to fundamental rights must be provided for by law.

“The proposed Regulation should provide more detailed elements both on the limits to fundamental rights that the specific type and features of the technology to be used would entail and related possible safeguard measures.”

The Commission was contacted for a response to the legal opinion. A spokesperson declined to comment on leaks. However, the Commission’s spokesperson for home affairs, Anitta Hipper, offered some general remarks on the proposal, which is now under negotiation with the EU’s co-legislators — claiming:

The proposed legislation does not discourage or prevent in any way the use of end-to-end encryption. The proposed Regulation leaves to the provider concerned the choice of the technologies to be operated to comply effectively with detection orders, provided that the technologies meet the requirements of the Regulation. This includes the use of end-to-end encryption technology, which is an important tool to guarantee the security and confidentiality of the communications of users, including those of children.

As per the bloc’s usual lawmaking process, the proposal has been handed to co-legislators in the parliament and Council, and the draft legislation remains under discussion as the other EU institutions work out their negotiating positions ahead of talks to agree a final text. It remains to be seen whether the controversial comms surveillance proposal will be adopted in its current (flawed, as legal experts tell it) form — or whether lawmakers will heed such trenchant critiques and make changes to bring it in line with EU law.

If the proposal isn’t substantially amended, it’s a safe bet it will face legal challenges — and, ultimately, looks likely to be unpicked by the EU’s top court (albeit, that would be several years down the line).

Platforms themselves may also find ways to object — as they have been warning they will if the U.K. presses ahead with its own encryption-threatening online safety legislation.

Pirate Party MEP Patrick Breyer, shadow rapporteur for his political group in the European Parliament’s Civil Liberties Committee (LIBE) — and a long-time opponent of mass surveillance of private communications — seized on the legal opinion to press the case for lawmakers to rethink.

“The EU Council’s services now confirm in crystal clear words what other legal experts, human rights defenders, law enforcement officials, abuse victims and child protection organisations have been warning about for a long time: obliging e-mail, messaging and chat providers to search all private messages for allegedly illegal material and report to the police destroys and violates the right to confidentiality of correspondence,” he said in a statement.

“A flood of mostly false reports would make criminal investigations more difficult, criminalise children en masse and fail to bring the abusers and producers of such material to justice. According to this expertise, searching private communications for potential child sexual exploitation material, known or unknown, is legally feasible only if the search provisions are targeted and limited to persons presumably involved in such criminal activity.

“I call on EU governments to take a U-turn and stop the dystopian China-style chat control plans which they now know violate the fundamental rights of millions of citizens! No one is helping children with a regulation that will inevitably fail before the European Court of Justice. The Swedish government, currently holding the EU Council Presidency, must now immediately remove blanket chat control as well as generalised age verification from the proposed legislation. Governments of Europe, respect our fundamental right to confidential and anonymous correspondence now!”

“I have hopes that the wind may be changing regarding chat control,” Breyer added. “What children really need and want is a safe and empowering design of chat services as well as Europe-wide standards for effective prevention measures, victim support, counselling and criminal investigations.”

For more on the Commission’s CSAM scanning proposal check out our report from last year.

In additional general remarks in support of the proposal, the EU’s home affairs spokeswoman also told us:

The focus should be on finding the most effective solutions rapidly. We cannot afford wasting a second. And numbers are telling: 87 million pictures and videos of child sexual abuse were detected online worldwide last year, up from 85 million the year before.

Detection and reporting of child sexual abuse by Internet companies has already been taking place, and has been key to starting investigations, for more than a decade. On August 3, 2024, the EU interim regulation that allows service providers to continue voluntary detection and reporting of online child sexual abuse and removal of child sexual abuse material will expire. If this happens, and the current proposal is not adopted, it will be forbidden for tech companies to detect this criminal content in online messages, from which a vast majority of the reports originates today. This will make it easier for predators to share child sexual abuse material and groom children in the EU and to get away with it unpunished. The interim regulation was a temporary fix but to fight these crimes we need a permanent solution, built on the Digital Services Act, fully aligned with GDPR.

Commissioner Johansson was in Spain last week, meeting with the Spanish authorities ahead of the incoming Council Presidency. The fight against child sexual abuse remains high on the Council’s agenda. The Commission will continue to work closely with co-legislators on this proposal.

This report was updated with comment from the Commission.

