California’s privacy watchdog eyes AI rules with opt-out and access rights

Image: holographic human-type AI robot and programming data on a black background. Image Credits: Yuichiro Chino / Getty Images

California’s Privacy Protection Agency (CPPA) is preparing for its next trick: Putting guardrails on AI.

The state privacy regulator, which has an important role in setting rules of the road for digital giants given how much of Big Tech (and Big AI) is headquartered on its sun-kissed soil, has today published draft regulations for how people’s data can be used for what it refers to as automated decisionmaking technology (ADMT*). Aka AI.

The draft represents “by far the most comprehensive and detailed set of rules in the ‘AI space’”, Ashkan Soltani, the CPPA’s exec director, told TechCrunch. The approach takes inspiration from existing rules in the European Union, where the bloc’s General Data Protection Regulation (GDPR) has given individuals rights over automated decisions with a legal or significant impact on them since coming into force back in May 2018 — but aims to build on it with more specific provisions that may be harder for tech giants to wiggle away from.

The core of the planned regime — which the Agency intends to work on finalizing next year, after a consultation process — includes opt-out rights, pre-use notice requirements and access rights which would enable state residents to obtain meaningful information on how their data is being used for automation and AI tech.

AI-based profiling could even fall within the scope of the planned rules, per the draft the CPPA has presented today. So — assuming this provision survives the consultation process and makes it into the hard-baked rules — there could be big implications for US adtech giants like Meta, whose business model hinges on tracking and profiling users to target them with ads.

Such firms could be required to offer California residents the ability to refuse commercial surveillance, with the proposed rules stating businesses must provide consumers with the ability to opt out of their data being processed for behavioral advertising. The current draft further stipulates that behavioral advertising use-cases cannot make use of a number of exemptions to the opt-out right that may apply in other scenarios (such as where ADMT is being used for security or fraud prevention purposes).

The CPPA’s approach to regulating ADMT is risk-based, per Soltani. This echoes another piece of in-train EU legislation: the AI Act — a dedicated risk-based framework for regulating applications of artificial intelligence which has been on the table in draft form since 2021 but is now at a delicate stage of co-legislation, with the bloc’s lawmakers clashing over the not-so-tiny detail of how (or even whether) to regulate Big AI, among several other policy disputes on the file.

Given the discord around the EU’s AI Act, as well as the ongoing failure of US lawmakers to pass a comprehensive federal privacy law — since there’s only so much presidential Executive Orders can do — there’s a plausible prospect of California ending up as one of the top global rulemakers on AI.

That said, the impact of California’s AI rules is likely to remain local, given their focus on affording protections and controls to state residents. In-scope companies might choose to go further — such as, say, offering the same package of privacy protections to residents of other US states. But that’s up to them. And, bottom line, the CPPA’s reach and enforcement are tied to the California border.

Its bid to tackle AI follows the state’s introduction of GDPR-inspired privacy rules with the California Consumer Privacy Act (CCPA), which came into effect in early 2020. Since then the Agency has been pushing to go further. And, in fall 2020, a ballot measure secured backing from state residents to reinforce and redefine parts of the privacy law. The new measures laid out in draft today to address ADMT are part of that effort.

“The proposed regulations would implement consumers’ right to opt out of, and access information about, businesses’ uses of ADMT, as provided for by the [CCPA],” the CPPA wrote in a press release. “The Agency Board will provide feedback on these proposed regulations at the December 8, 2023, board meeting, and the Agency expects to begin formal rulemaking next year.”

In parallel, the regulator is considering draft risk assessment requirements which are intended to work in tandem with the planned ADMT rules. “Together, these proposed frameworks can provide consumers with control over their personal information while ensuring that automated decisionmaking technologies, including those made from artificial intelligence, are used with privacy in mind and in design,” it suggests.

Commenting in a statement, Vinhcent Le, member of the regulator’s board and of the New Rules Subcommittee that drafted the proposed regulations, added: “Once again, California is taking the lead to support privacy-protective innovation in the use of emerging technologies, including those that leverage artificial intelligence. These draft regulations support the responsible use of automated decisionmaking while providing appropriate guardrails with respect to privacy, including employees’ and children’s privacy.”

What’s being proposed by the CPPA?

The planned regulations deal with access and opt-out rights in relation to businesses’ use of ADMT.

Per an overview of the draft regulation, the aim is to establish a regime that will let state residents request an opt-out from their data being used for automated decisionmaking — with a relatively narrow set of exemptions planned where use of the data is necessary (and solely intended) for one of the following: security purposes (“to prevent, detect, and investigate security incidents”); fraud prevention; safety (“to protect the life and physical safety of consumers”); or a good or service requested by the consumer.

The latter comes with a string of caveats, including that the business “has no reasonable alternative method of processing” and must demonstrate “(1) the futility of developing or using an alternative method of processing; (2) an alternative method of processing would result in a good or service that is not as valid, reliable, and fair; or (3) the development of an alternative method of processing would impose extreme hardship upon the business”.

So, tl;dr: a business that intends to use ADMT and argues, crudely, that users can’t opt out of their data being processed or fed to its models simply because the product contains automation/AI looks unlikely to wash. At least not without going to the extra effort of standing up a claim that, for instance, less intrusive processing would not suffice for its use-case.

Basically, then, the aim is for there to be a compliance cost attached to trying to deny consumers the ability to opt out of automation/AI being applied to their data.
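
To make those mechanics concrete, here’s a minimal, purely hypothetical sketch of how a business might gate ADMT processing on the opt-out and exemption logic described above; every name and structure in it is invented for illustration and nothing is drawn from the draft’s actual text.

    from enum import Enum

    # Hypothetical purposes mirroring the draft's narrow exemptions: security,
    # fraud prevention, safety, or a good/service the consumer requested.
    class Purpose(Enum):
        SECURITY = "security"
        FRAUD_PREVENTION = "fraud prevention"
        SAFETY = "safety"
        REQUESTED_SERVICE = "requested good or service"
        BEHAVIORAL_ADS = "behavioral advertising"

    # Exemptions that survive an opt-out outright, per the draft's overview.
    ALWAYS_EXEMPT = {Purpose.SECURITY, Purpose.FRAUD_PREVENTION, Purpose.SAFETY}

    def may_apply_admt(consumer_opted_out: bool, purpose: Purpose,
                       no_reasonable_alternative: bool = False) -> bool:
        """Illustrative gate: may this consumer's data be fed to ADMT?"""
        if not consumer_opted_out:
            return True  # no opt-out on file (a pre-use notice is still required)
        if purpose in ALWAYS_EXEMPT:
            return True
        if purpose is Purpose.REQUESTED_SERVICE:
            # This exemption only holds if the business can demonstrate there
            # is no reasonable alternative method of processing.
            return no_reasonable_alternative
        return False  # behavioral advertising gets no exemption; the opt-out wins

The intent of the draft shows up in the last branch: behavioral advertising has no path around an opt-out, while the “requested good or service” route only opens if the business shoulders the burden of proof.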

Of course a law that lets consumers opt out of privacy-hostile data processing is only going to work if the people involved are aware of how their information is being used. Hence the planned framework also sets out a requirement that businesses wanting to apply ADMT must provide so-called “pre-use notices” to affected consumers — so they can decide whether to opt out of their data being used, or indeed whether to exercise their access right to get more info about the intended use of automation/AI.

This too looks broadly similar to provisions in the EU’s GDPR which put transparency (and fairness) obligations on entities processing personal data — in addition to requiring a valid lawful basis for them to use personal data.

Although the European regulation contains some exceptions — such as where info was not directly collected from individuals and fulfilling their right to be informed would be “unreasonably expensive” or “impossible” — which may have undermined EU lawmakers’ intent that data subjects should be kept informed. (Perhaps especially in the realm of AI — and generative AI — where large amounts of personal data have clearly been scraped off the Internet but web users have not been proactively informed about this heist of their info; see, for example, regulatory action against Clearview AI. Or the open investigations of OpenAI’s ChatGPT.)

The proposed Californian framework also includes GDPR-esque access rights which will allow state residents to ask a business to provide them with: details of its use of ADMT; the technology’s output with respect to them; how decisions were made (including details of any human involvement, and whether the use of ADMT was evaluated for “validity, reliability and fairness”); details of the logic of the ADMT, including “key parameters” affecting the output and how they applied to the individual; information on the range of possible outputs; and info on how the consumer can exercise their other CCPA rights and submit a complaint about the use of ADMT.
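
As a rough illustration only, those enumerated fields might map onto a structured response shaped something like the hypothetical sketch below; the field names are invented, not taken from the draft.

    from dataclasses import dataclass

    @dataclass
    class ADMTAccessResponse:
        """Hypothetical shape of a reply to a consumer access request, loosely
        following the fields enumerated in the draft; names are invented."""
        use_description: str      # details of the business's use of ADMT
        output_for_consumer: str  # the technology's output with respect to them
        human_involvement: str    # any human role in how decisions were made
        fairness_evaluated: bool  # whether validity/reliability/fairness were assessed
        key_parameters: dict      # logic of the ADMT and how it applied to them
        possible_outputs: list    # the range of possible outputs
        complaint_info: str       # how to exercise other CCPA rights and complain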

Again, the GDPR provides a broadly similar right — stipulating that data subjects must be provided with “meaningful information about the logic involved” in automated decisions that have a significant/legal effect on them. But it’s still falling to European courts to interpret where the line lies when it comes to how much (and how specific) information algorithmic platforms must hand over in response to these GDPR subject access requests (see, for example, litigation against Uber in the Netherlands, where a number of drivers have been trying to get details of systems involved in flagging accounts for potential fraud).

The CPPA looks to be trying to pre-empt attempts by ADMT companies to evade the transparency intent of providing consumers with access rights — by setting out, in greater detail, what information they must provide in response to these requests. And while the draft framework does include some exemptions to access rights, just three are proposed: security, fraud prevention and safety — so, again, this looks like an attempt to limit excuses and (consequently) expand algorithmic accountability.

Not every use of ADMT will be in scope of the CPPA’s proposed rules. The draft regulation proposes the following thresholds:

  1. Making a decision that produces legal or similarly significant effects concerning a consumer (e.g., decisions to provide or deny employment opportunities).
  2. Profiling a consumer who is acting in their capacity as an employee, independent contractor, job applicant, or student.
  3. Profiling a consumer while they are in a publicly accessible place.

The Agency also says the upcoming consultation will discuss whether the rules should also apply to: profiling a consumer for behavioral advertising; profiling a consumer the business has “actual knowledge is under the age of 16” (i.e. profiling children); and processing the personal information of consumers to train ADMT — indicating it’s not yet confirmed how much of the planned regime will apply to (and potentially limit the modus operandi of) adtech and data-scraping generative AI giants.
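
Taken together, the confirmed thresholds and the open consultation questions can be thought of as two buckets, as in this purely hypothetical sketch (the labels are invented for illustration):

    # Hypothetical two-bucket scope check against the draft's thresholds.
    CONFIRMED_THRESHOLDS = {
        "significant_decision",    # legal or similarly significant effects
        "worker_profiling",        # employees, contractors, applicants, students
        "public_place_profiling",
    }
    CONSULTATION_PENDING = {
        "behavioral_advertising",
        "profiling_under_16",
        "admt_training_data",
    }

    def scope_status(use: str) -> str:
        if use in CONFIRMED_THRESHOLDS:
            return "in scope under the draft"
        if use in CONSULTATION_PENDING:
            return "to be decided in the consultation"
        return "outside the proposed thresholds"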

The more expansive list of proposed thresholds would clearly make the law bite down harder on adtech giants and Big AI. But, it being California, the CPPA can probably expect a lot of pushback from local giants like Meta and OpenAI, to name two.

The draft proposal marks the start of the CPPA’s rulemaking process, with the aforementioned consultation — which will include a public component — set to kick off in the coming weeks. So it’s still a ways off a final text. A spokeswoman for the CPPA said the Agency is unable to comment on a possible timeline for the rulemaking, but noted this is something that will be discussed at the upcoming board meeting on December 8.

If the Agency is able to move quickly, it’s possible it could have a regulation finalized in the second half of next year. Although there would obviously need to be a grace period before compliance kicks in for in-scope companies — so 2025 looks like the very earliest for a law to be up and running. And who knows how far developments in AI will have moved on by then.

* The CPPA’s proposed definition for ADMT in the draft framework is “any system, software, or process — including one derived from machine-learning, statistics, other data-processing or artificial intelligence — that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking”. Its definition also affirms “ADMT includes profiling” — which is defined as “any form of automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements”.
