Snap’s AI chatbot draws scrutiny in UK over kids’ privacy concerns

Snap’s AI chatbot has landed the company on the radar of the U.K.’s data protection watchdog, which has raised concerns that the tool may pose a risk to children’s privacy.

The Information Commissioner’s Office (ICO) announced today that it’s issued a preliminary enforcement notice on Snap over what it described as “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’”.

The ICO action is not a breach finding. But the notice indicates the U.K. regulator has concerns that Snap may not have taken adequate steps to ensure the product complies with data protection rules, which, since 2021, have been dialled up to include the Children’s Code (formally, the Age Appropriate Design Code).

“The ICO’s investigation provisionally found the risk assessment Snap conducted before it launched ‘My AI’ did not adequately assess the data protection risks posed by the generative AI technology, particularly to children,” the regulator wrote in a press release. “The assessment of data protection risk is particularly important in this context which involves the use of innovative technology and the processing of personal data of 13 to 17 year old children.”

Snap will now have a chance to respond to the regulator’s concerns before the ICO takes a final decision on whether the company has broken the rules.

“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’,” added information commissioner John Edwards in a statement. “We have been clear that organisations must consider the risks associated with AI, alongside the benefits. Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”

Snap launched the generative AI chatbot in February (though it didn’t arrive in the U.K. until April), leveraging OpenAI’s ChatGPT large language model (LLM) technology to power a bot that was pinned to the top of users’ feeds to act as a virtual friend that could be asked for advice or sent snaps.

Initially the feature was only available to subscribers of Snapchat+, a premium version of the ephemeral messaging platform. But pretty quickly Snap opened up access to “My AI” for free users too, also adding the ability for the AI to send snaps back to users who interacted with it (these snaps are created with generative AI).

The company has said the chatbot was developed with additional moderation and safeguarding features, including age consideration by default, with the aim of ensuring generated content is appropriate for the user. The bot is also programmed to avoid responses that are violent, hateful, sexually explicit, or otherwise offensive. Additionally, Snap’s Family Center parental controls let parents know whether their kid has communicated with the bot in the past seven days.

But despite the claimed guardrails, there have been reports of the bot going off the rails. In an early assessment back in March, The Washington Post reported that the chatbot had recommended ways to mask the smell of alcohol after being told the user was 15. In another case, when told the user was 13 and asked how they should prepare to have sex for the first time, the bot responded with suggestions for “making it special” by setting the mood with candles and music.

Snapchat users have also reportedly been bullying the bot, with some frustrated that an AI has been injected into their feeds in the first place.

Reached for comment on the ICO notice, a Snap spokesperson told TechCrunch:

We are closely reviewing the ICO’s provisional decision. Like the ICO we are committed to protecting the privacy of our users. In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.

It’s not the first time an AI chatbot has landed on the radar of European privacy regulators. In February, Italy’s Garante hit the San Francisco-based maker of “virtual friendship service” Replika with an order to stop processing local users’ data, also citing concerns about risks to minors.

The Italian authority put a similar stop-processing order on OpenAI’s ChatGPT the following month. That block was lifted in April, but only after OpenAI had added more detailed privacy disclosures and some new user controls, including letting users ask for their data not to be used to train its AIs and/or to be deleted.

The regional launch of Google’s Bard chatbot was also delayed after concerns were raised by its lead regional privacy regulator, Ireland’s Data Protection Commission. Bard subsequently launched in the EU in July, again after more disclosures and controls were added. Meanwhile, a regulatory taskforce set up within the European Data Protection Board remains focused on assessing how to enforce the bloc’s General Data Protection Regulation (GDPR) on generative AI chatbots, including ChatGPT and Bard.

Poland’s data protection authority also confirmed last month that it’s investigating a complaint against ChatGPT.

Discussing how privacy and data protection regulators are approaching generative AI, Dr Gabriela Zanfir-Fortuna, VP for global privacy at the Washington-based think tank the Future of Privacy Forum (FPF), pointed to a statement adopted this summer by G7 data protection authorities (a group that includes watchdogs in France, Germany, Italy and the U.K.), in which they listed key areas of concern, such as these tools’ legal basis for processing personal data, including minors’ data.

“Developers and providers should embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies, based on the concept of ‘Privacy by Design’ and document their choices and analyses in a privacy impact assessment,” the G7 DPAs also affirmed.

Earlier this year, the U.K.’s ICO also put out guidance for developers seeking to apply generative AI, listing eight questions it suggested they should be asking when building products such as AI chatbots.

Speaking at the G7 symposium in July, Edwards reiterated the need for developers to pay attention. In remarks picked up by the FPF, he said commissioners are “keen to ensure” they “do not miss this essential moment in the development of this new technology in a way that [they] missed the moment of building the business models underpinning social media and online advertising”. He also warned: “We are here and watching.”

So while Zanfir-Fortuna suggested it’s not too unusual to see the U.K. authority issue a public preliminary enforcement notice, as it has here with Snap, she agreed regulators are being perhaps more public than usual about their actions vis-à-vis generative AI, turning their attentiveness into a public warning even as they consider how best to enforce existing privacy rules on LLMs.

“All regulators have been acting quite cautiously, but always public, and they seem to want to persuade companies to be more cautious and to bring data protection on the top of their priorities when building these tools and making them available to the public,” she told TechCrunch. “A common thread in existing regulatory action is that we are seeing preliminary decisions, deadlines given to companies to bring their processing in compliance, letters of warning, press releases that investigations are open, rather than actual enforcement decisions.”

This report was updated with additional comment.