Clearview AI, the controversial facial recognition firm that scrapes selfies and other personal data off the Internet without consent to feed an AI-powered identity-matching service it sells to law enforcement and others, has been hit with another fine in Europe.
This one comes after it failed to respond to an order last year from the CNIL, France’s privacy watchdog, to stop its unlawful processing of French citizens’ information and delete their data.
Clearview responded to that order by, well, ghosting the regulator — thereby adding a third GDPR breach (non-cooperation with the regulator) to its earlier tally.
Here’s the CNIL’s summary of Clearview’s breaches:
- Unlawful processing of personal data (breach of Article 6 of the GDPR)
- Individuals’ rights not respected (Articles 12, 15 and 17 of the GDPR)
- Lack of cooperation with the CNIL (Article 31 of the GDPR)
“Clearview AI had two months to comply with the injunctions formulated in the formal notice and to justify them to the CNIL. However, it did not provide any response to this formal notice,” the CNIL wrote in a press release today announcing the sanction [emphasis its own].
“The chair of the CNIL therefore decided to refer the matter to the restricted committee, which is in charge of issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of 20 million euros, according to article 83 of the GDPR [General Data Protection Regulation].”
The EU’s GDPR allows for penalties of up to 4% of a firm’s worldwide annual revenue for the most serious infringements, or €20 million, whichever is higher. The CNIL’s press release makes clear it’s imposing the maximum amount it possibly can here.
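For context, the Article 83 cap works out as a simple formula: the higher of €20 million or 4% of worldwide annual turnover. A minimal sketch (the turnover figures below are hypothetical, purely for illustration — Clearview does not disclose its revenue):

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Maximum administrative fine under GDPR Article 83(5):
    the higher of EUR 20M or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a firm with (hypothetical) turnover of EUR 100M, 4% is only EUR 4M,
# so the EUR 20M floor is the applicable maximum:
print(gdpr_max_fine(100_000_000))    # 20000000

# Only above EUR 500M turnover does the 4% figure exceed EUR 20M:
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```

This is why, for a company of Clearview’s likely size, €20 million is the ceiling the CNIL can reach.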
Whether France will see a penny of this money from Clearview remains an open question, however.
The U.S.-based privacy-stripper has been issued with a slew of penalties by other data protection agencies across Europe in recent months, including €20M fines from Italy and Greece, and a smaller U.K. penalty. But it’s not clear it’s handed over any money to any of these authorities — and they have limited resources (and legal means) to try to pursue Clearview for payment outside their own borders.
So the GDPR penalties look mostly like a warning to stay away from Europe.
Clearview’s PR agency, LakPR Group, sent us this statement following the CNIL’s sanction — which it attributed to CEO Hoan Ton-That:
There is no way to determine if a person has French citizenship, purely from a public photo from the internet, and therefore it is impossible to delete data from French residents. Clearview AI only collects publicly available information from the internet, just like any other search engine like Google, Bing or DuckDuckGo.
The statement goes on to reiterate earlier claims by Clearview that it does not have a place of business in France or in the EU, nor undertake any activities that would “otherwise mean it is subject to the GDPR”, as it puts it — adding: “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”
(NB: On paper the GDPR has extraterritorial reach, so the former arguments are meaningless, while its claim that it’s not doing anything that would make it subject to the GDPR looks absurd given it’s amassed a database of over 20 billion images worldwide and Europe is, er, part of Planet Earth… )
Ton-That’s statement also repeats a much-trotted-out claim in Clearview’s public statements responding to the flow of regulatory sanctions its business attracts: that it created its facial recognition tech with “the purpose of helping to make communities safer and assisting law enforcement in solving heinous crimes against children, seniors and other victims of unscrupulous acts”, not to cash in by unlawfully exploiting people’s privacy. In any case, having a ‘pure’ motive would make no difference to its requirement, under European law, to have a valid legal basis to process people’s data in the first place.
“We only collect public data from the open internet and comply with all standards of privacy and law. I am heartbroken by the misinterpretation by some in France, where we do no business, of Clearview AI’s technology to society. My intentions and those of my company have always been to help communities and their people to live better, safer lives,” concludes Clearview’s PR.
Each time it has received a sanction from an international regulator it has done the same thing: denied committing any breach and disputed that the foreign body has any jurisdiction over its business. So its strategy for dealing with its own data processing lawlessness appears to be simple non-cooperation with regulators outside the US.
Obviously this only works if you plan for your execs/senior personnel to never set foot in the territories where your business is under sanction and abandon any notion of selling the sanctioned service to overseas customers. (Last year Sweden’s data protection watchdog also fined a local police authority for unlawful use of Clearview — so European regulators can act to clamp down on any local demand too, if required.)
On home turf, Clearview has finally had to face up to some legal red lines recently.
Earlier this year it agreed to settle a lawsuit that had accused it of running afoul of an Illinois law banning the use of individuals’ biometric data without consent. The settlement included Clearview agreeing to some limits on its ability to sell its software to most U.S. companies, but it still trumpeted the outcome as a “huge win”, claiming it would be able to circumvent the ruling by selling its algorithm (rather than access to its database) to private companies in the U.S.
The need to empower regulators so they can order the deletion (or market withdrawal) of algorithms trained on unlawfully processed data does look like an important upgrade to their toolboxes if we’re to avoid an AI-fuelled dystopia.
And it just so happens that the EU’s incoming AI Act may contain such a power, per legal analysis of the proposed framework.
The bloc has also, more recently, presented a plan for an AI Liability Directive, which is intended to encourage compliance with the broader AI Act by linking compliance to a reduced risk that AI model makers, deployers, users etc. can be successfully sued if their products cause a range of harms, including to people’s privacy.