
ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI


Image Credits: Didem Mente/Anadolu Agency / Getty Images

OpenAI has been told it’s suspected of violating European Union privacy rules, following a multi-month investigation of its AI chatbot, ChatGPT, by Italy’s data protection authority.

Details of the Italian authority’s draft findings haven’t been disclosed. But the Garante said today that OpenAI has been notified and given 30 days to respond with a defence against the allegations.

Confirmed breaches of the pan-EU regime can attract fines of up to €20 million, or up to 4% of global annual turnover. More uncomfortably for an AI giant like OpenAI, data protection authorities (DPAs) can issue orders that require changes to how data is processed in order to bring an end to confirmed violations. So it could be forced to change how it operates. Or pull its service out of EU Member States where privacy authorities seek to impose changes it doesn’t like.

OpenAI was contacted for a response to the Garante’s notification of violation. We’ll update this report if they send a statement.

Update: OpenAI said:

We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy. We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people. We plan to continue to work constructively with the Garante.

AI model training lawfulness in the frame

The Italian authority raised concerns about OpenAI’s compliance with the bloc’s General Data Protection Regulation (GDPR) last year — when it ordered a temporary ban on ChatGPT’s local data processing which led to the AI chatbot being temporarily suspended in the market.

The Garante’s March 30 provision to OpenAI, aka a “register of measures”, highlighted both the lack of a suitable legal basis for the collection and processing of personal data for the purpose of training the algorithms underlying ChatGPT; and the tendency of the AI tool to ‘hallucinate’ (i.e. its potential to produce inaccurate information about individuals) — as among its issues of concern at that point. It also flagged child safety as a problem.

In all, the authority said that it suspected ChatGPT to be breaching Articles 5, 6, 8, 13 and 25 of the GDPR.

Despite identifying this laundry list of suspected violations, OpenAI was able to resume service of ChatGPT in Italy relatively quickly last year, after taking steps to address some issues raised by the DPA. However, the Italian authority said it would continue to investigate the suspected violations. It has now arrived at the preliminary conclusion that the tool is breaking EU law.

While the Italian authority hasn’t yet said which of the previously suspected ChatGPT breaches it’s confirmed at this stage, the legal basis OpenAI claims for processing personal data to train its AI models looks like a particular crux issue.

This is because ChatGPT was developed using masses of data scraped off the public Internet — information which includes the personal data of individuals. And the problem OpenAI faces in the European Union is that processing EU people’s data requires it to have a valid legal basis.

The GDPR lists six possible legal bases — most of which are simply not relevant in this context. Last April, OpenAI was told by the Garante to remove references to “performance of a contract” for ChatGPT model training — leaving it with just two possibilities: consent or legitimate interests.

Given the AI giant has never sought to obtain the consent of the countless millions (or even billions) of web users whose information it has ingested and processed for AI model building, any attempt to claim it had Europeans’ permission for the processing would seem doomed to fail. And when OpenAI revised its documentation after the Garante’s intervention last year, it appeared to be seeking to rely on a claim of legitimate interest. However, this legal basis still requires a data controller to allow data subjects to raise an objection — and have processing of their info stop.

How OpenAI could do this in the context of its AI chatbot is an open question. (It might, in theory, require it to withdraw and destroy illegally trained models and retrain new models without the objecting individual’s data in the training pool — but, assuming it could even identify all the unlawfully processed data on a per individual basis, it would need to do that for the data of each and every objecting EU person who told it to stop… Which, er, sounds expensive.)

Beyond that thorny issue, there is the wider question of whether the Garante will finally conclude legitimate interests is even a valid legal basis in this context.

Frankly, that looks unlikely. Because LI is not a free-for-all. It requires data controllers to balance their own interests against the rights and freedoms of the individuals whose data is being processed — and to consider things like whether individuals would have expected this use of their data, and the potential for it to cause them unjustified harm. (If they would not have expected it, and there are risks of such harm, LI will not be found to be a valid legal basis.)

The processing must also be necessary, with no other, less intrusive way for the data processor to achieve their end.

Notably, the EU’s top court has previously found legitimate interests to be an inappropriate basis for Meta to carry out tracking and profiling of individuals to run its behavioral advertising business on its social networks. So there is a big question mark over the notion of another type of AI giant seeking to justify processing people’s data at vast scale to build a commercial generative AI business — especially when the tools in question generate all sorts of novel risks for named individuals (from disinformation and defamation to identity theft and fraud, to name a few).

A spokesperson for the Garante confirmed that the legal basis for processing people’s data for model training remains among the suspected violations. But they did not confirm exactly which article (or articles) it suspects OpenAI of breaching at this point.

The authority’s announcement today is not yet the final word, either, as it will wait to receive OpenAI’s response before taking a final decision.

Here’s the Garante’s statement (which we’ve translated from Italian using AI):

[Italian Data Protection Authority] has notified OpenAI, the company that runs the ChatGPT artificial intelligence platform, of its notice of objection for violating data protection regulations.

Following the provisional restriction-of-processing order adopted by the Garante against the company on March 30, and in light of the outcome of the preliminary investigation carried out, the Authority considered that the elements acquired may constitute one or more unlawful acts with respect to the provisions of the EU Regulation.

OpenAI will have 30 days to communicate its defence briefs on the alleged violations.

In defining the proceedings, the Garante will take into account the ongoing work of the special task force set up by the European Data Protection Board (EDPB), which brings together the EU’s data protection authorities.

OpenAI is also facing scrutiny over ChatGPT’s GDPR compliance in Poland, following a complaint last summer which focuses on an instance of the tool producing inaccurate information about a person and OpenAI’s response to that complainant. That separate GDPR probe remains ongoing.

OpenAI, meanwhile, has responded to rising regulatory risk across the EU by seeking to establish a physical base in Ireland; and announcing, in January, that this Irish entity would be the service provider for EU users’ data going forward.

Its hope with these moves is to gain so-called “main establishment” status in Ireland and switch to having assessment of its GDPR compliance led by Ireland’s Data Protection Commission, via the regulation’s one-stop-shop mechanism — rather than (as now) its business being potentially subject to DPA oversight from anywhere in the Union where its tools have local users.

However, OpenAI has yet to obtain this status, so ChatGPT could still face probes by DPAs elsewhere in the EU. And even if it gains the status, the Italian probe and enforcement will continue, as the data processing in question predates the change to its processing structure.

The bloc’s data protection authorities have sought to coordinate on their oversight of ChatGPT by setting up a taskforce to consider how the GDPR applies to the chatbot, via the European Data Protection Board, as the Garante’s statement notes. That (ongoing) effort may, ultimately, produce more harmonized outcomes across discrete ChatGPT GDPR investigations — such as those in Italy and Poland.

However authorities remain independent and competent to issue decisions in their own markets. So, equally, there are no guarantees any of the current ChatGPT probes will arrive at the same conclusions.

