Controversial US facial recognition company Clearview AI has won an appeal against a privacy sanction issued by the U.K.'s data protection watchdog last year.
In May 2022, the Information Commissioner’s Office (ICO) served a formal enforcement notice on Clearview, including a fine of around £7.5 million (~$10 million), after concluding the selfie-scraping AI firm had committed a string of breaches of local privacy laws. It also ordered the company, which uses the scraped personal data to sell an identity-matching service to law enforcement and national security bodies, to delete the information it held on U.K. citizens.
Clearview filed an appeal against the decision. In a ruling issued yesterday, its legal challenge prevailed: the tribunal found the company’s activities fall outside the scope of U.K. data protection law owing to an exemption related to foreign law enforcement.
The tribunal did agree with the ICO that Clearview’s processing was related to the monitoring of data subjects’ behavior carried out by its clients, and it found the company to be a joint controller for that processing. But the ICO’s case came unstuck on legal jurisdiction.
The U.K. General Data Protection Regulation (U.K. GDPR) stipulates that the processing of personal data by competent authorities for law enforcement purposes is outside its scope; such processing is instead subject to the rules in Part 3 of the Data Protection Act 2018 (which transposed the EU Law Enforcement Directive, (EU) 2016/680, into U.K. law, where it was retained post-Brexit).
Per the ruling, Clearview argued it’s a foreign company providing its service to “foreign clients, using foreign IP addresses, and in support of the public interest activities of foreign governments and government agencies, in particular in relation to their national security and criminal law enforcement functions”.
The tribunal accepted its claim to provide service exclusively to non-U.K./EU law enforcement or national security bodies and their contractors (and that all such contractors also only carry out criminal law enforcement and/or national security functions) — overturning the ICO’s enforcement decision finding a string of breaches of the U.K. GDPR.
Contacted for a response to the ruling, an ICO spokesperson emailed us this statement:
The ICO will take stock of today’s judgment and carefully consider next steps. It is important to note that this judgment does not remove the ICO’s ability to act against companies based internationally who process data of people in the U.K., particularly businesses scraping data of people in the U.K., and instead covers a specific exemption around foreign law enforcement.
The data protection watchdog did not confirm whether or not it will appeal — but told us it has 28 days to decide.
It’s not clear why the ICO did not bring a claim against Clearview under the DPA 2018, rather than the U.K. GDPR. (The ICO declined to comment on that.)
Clearview, meanwhile, welcomed the tribunal ruling. “We are pleased with the tribunal’s decision to reverse the U.K. ICO’s unlawful order against Clearview AI,” said general counsel, Jack Mulcaire, in a brief response statement.
The U.K. sanction was just one of a number of enforcement actions that have been brought against Clearview in recent years under regional data protection laws.
Data protection authorities in France, Italy and Greece have found the US firm in breach of the EU’s GDPR — which the U.K.’s domestic data protection framework is based on. However, since Brexit, the U.K. GDPR is distinct law — so it’s not clear whether this tribunal ruling will have direct implications for other enforcements against Clearview which make reference to the EU’s GDPR.
Nonetheless, DPAs in the bloc have also struggled to enforce their will on Clearview.
Back in May, France’s CNIL confirmed Clearview had not paid the penalties it had levied, and announced an additional fine for non-payment at that point. The authority had also ordered Clearview to delete data on French citizens and banned further unlawful processing. But it’s not clear the CNIL has been able to enforce those injunctions either.
Earlier this year the French authority told TechCrunch it was talking to the US Federal Trade Commission — “to discuss how we can ensure that the injunction issued against the company is enforced”.
Contacted for an update on its efforts to make Clearview comply with its orders, a CNIL spokesperson confirmed the company still has not paid the penalties ordered. They also told us it has not appealed the regulatory sanction either. “Yes, we could describe them as non-cooperative,” they added.
We’ve also reached out to the Italian and Greek DPAs with questions about their own procedures against it and will update this report with any responses.
The Clearview case highlights the challenges European regulators face in trying to enforce data protection rules, which, in the case of the GDPR at least, do apply extraterritorially, i.e. against foreign-located firms processing local people’s data. But Clearview’s pivot to fully focusing its business on law enforcement and national security agencies appears to have complicated the legal picture.
The company claims it does not have any local customers, saying it does not provide its service to users located in the U.K. or EU. But that was not always the case. Back in 2021, Sweden’s data protection authority targeted a previous Clearview customer for enforcement — fining the Swedish Police Authority €250,000 ($300,000+) for unlawful use of its AI tech which it found was in breach of the country’s Criminal Data Act.
That investigation was specific to the local police authority’s use of Clearview’s tool — with the Swedish authority finding it had not fulfilled its legal obligations as a data controller, including by failing to implement sufficient organisational measures to demonstrate the processing was compliant with the law (such as not conducting a data protection impact assessment). But it underlines that law enforcement authorities operating in the EU don’t have carte blanche to use Clearview.
Indeed, the opposite may be true; it may be that local law enforcement cannot — lawfully — make use of a tool that triggers so many fundamental rights concerns.
The European Data Protection Board (EDPB) and the European Data Protection Supervisor have previously called for a ban on the processing of personal data in a law enforcement context “that would rely on a database populated by collection of personal data on a mass-scale and in an indiscriminate way”, as the EDPB put it last year — explicitly giving the example of scraping photographs and facial pictures from the public internet (as Clearview does).
The EDPB has also published detailed guidance on the use of facial recognition in law enforcement that cautions authorities they can’t ignore data protection rules and principles, and must make careful assessments of “necessity and proportionality” when considering adopting AI tools, as well as examining “all possible implications for other fundamental rights”.
So the bloc’s data protection framework does make it very difficult, if not impossible, for Clearview to sell its privacy-hostile services to regional law enforcement clients, even as the GDPR limits its ability to sell services to regional customers for any other purpose.
Over the pond, meanwhile, recent US litigation against Clearview by the ACLU, under an Illinois law banning the use of individuals’ biometric data without consent, ended in a settlement last year that included a national ban on the company selling or giving away access to its facial recognition database to private companies and individuals — essentially limiting its business to US government contracts (except for state or local government entities in Illinois itself, which were covered by the ban).
So, for European regulators, the question is whether they can do much to stop a US company hoovering up data on their citizens and selling privacy-hostile facial matching to US law enforcement or other foreign authorities and state agencies.
Under current laws and enforcement powers that looks tricky.
The controversy around Clearview has landed on the radar of EU lawmakers who are working on establishing a risk-based framework for regulating applications of artificial intelligence. And, earlier this year, MEPs in the European Parliament backed amendments to the draft EU AI Act that proposed expanding a list of prohibited AI practices to include what amounts to a Clearview clause. This amendment would explicitly ban indiscriminate scraping of biometric data from social media sites (and elsewhere) to create facial recognition databases — an action MEPs affirmed as violating human rights, including the right to privacy.
The bloc’s co-legislators are still working on the AI Act file. So it remains to be seen whether the proposed prohibition on scraping selfies to power facial recognition-based ID matching will make it into the final text. If it does, it would clearly further harden regional law against Clearview.
But, once again, whether a fresh network of regional regulators, tasked with enforcing the AI Act, will have any more success at forcing an uncooperative foreign firm to stop abusing Europeans’ rights remains to be seen.
This report was updated with responses from CNIL and further details from the ICO.