EU lawmakers eye tiered approach to regulating generative AI


EU lawmakers in the European parliament are closing in on how to tackle generative AI as they work to fix their negotiating position so that the next stage of legislative talks can kick off in the coming months.

The hope then is that a final consensus on the bloc’s draft law for regulating AI can be reached by the end of the year.

“This is the last thing still standing in the negotiation,” says MEP Dragos Tudorache, the co-rapporteur for the EU’s AI Act, discussing MEPs’ talks around generative AI in an interview with TechCrunch. “As we speak, we are crossing the last ‘T’s and dotting the last ‘I’s. And sometime next week I’m hoping that we will actually close — which means that sometime in May we will vote.”

The Council adopted its position on the regulation back in December. But where Member States largely favored deferring the question of what to do about generative AI to additional implementing legislation, MEPs look set to propose that hard requirements are added to the Act itself.

In recent months, tech giants’ lobbyists have been pushing in the opposite direction, of course, with companies such as Google and Microsoft arguing for generative AI to get a regulatory carve out of the incoming EU AI rules.

Where things will end up remains to be confirmed. But discussing what’s likely to be the parliament’s position on generative AI in the Act, Tudorache suggests MEPs are gravitating towards a layered approach — three layers, in fact — one to address responsibilities across the AI value chain; another to ensure foundational models get some guardrails; and a third to tackle specific content issues attached to generative models, such as OpenAI’s ChatGPT.

Under the MEPs’ current thinking, one of these three layers would apply to all general purpose AIs (GPAIs) — whether big or small; foundational or non-foundational models — and be focused on regulating relationships in the AI value chain.

“We think that there needs to be a level of rules that says ‘entity A’ [that] puts on the market a general purpose [AI] has an obligation towards ‘entity B’, downstream, that buys the general purpose [AI] and actually gives it a purpose,” he explains. “Because it gives it a purpose that might become high risk, it needs certain information. In order to comply [with the AI Act] it needs to explain how the model was trained. The accuracy of the data sets from biases [etc].”

A second proposed layer would address foundational models — by setting some specific obligations for makers of these base models.

“Given their power, given the way they are trained, given the versatility, we believe the providers of these foundational models need to do certain things — both ex ante… but also during the lifetime of the model,” he says. “And it has to do with transparency, it has to do, again, with how they train, how they test prior to going on the market. So basically, what is the level of diligence, the responsibility, that they have as developers of these models?”

The third layer MEPs are proposing would target generative AIs specifically — meaning a subset of GPAIs/foundational models, such as large language models or generative art and music AIs. Here lawmakers working to set the parliament’s mandate are taking the view these tools need even more specific responsibilities; both when it comes to the type of content they can produce (with early risks arising around disinformation and defamation); and in relation to the thorny (and increasingly litigated) issue of copyrighted material used to train AIs.

“We’re not inventing a new regime for copyright because there is already copyright law out there. What we are saying… is there has to be a documentation and transparency about material that was used by the developer in the training of the model,” he emphasizes. “So that afterwards the holders of those rights… can say hey, hold on, you used my data, you used my songs, you used my scientific article — well, thank you very much, that was protected by law, therefore, you owe me something — or no. For that [we] will use the existing copyright laws. We’re not replacing that or doing that in the AI Act. We’re just bringing that inside.”

The Commission proposed the draft AI legislation a full two years ago, laying out a risk-based approach for regulating applications of artificial intelligence and setting the bloc’s co-legislators, the parliament and the Council, the no small task of passing the world’s first horizontal regulation on AI.

Adoption of this planned EU AI rulebook is still a ways off. But progress is being made and agreement between MEPs and Member States on a final text could be hashed out by the end of the year, per Tudorache — who notes that Spain, which takes up the rotating six-month Council presidency in July, is eager to deliver on the file. Although he also concedes there are still likely to be plenty of points of disagreement between MEPs and Member States that will have to be worked through. So a final timeline remains uncertain. (And predicting how the EU’s closed-door trilogues will go is never an exact science.)

One thing is clear: The effort is timely — given how AI hype has rocketed in recent months, fuelled by developments in powerful generative AI tools, like DALL-E and ChatGPT.

The excitement around the boom in usage of generative AI tools that let anyone produce works such as written compositions or visual imagery just by inputting a few simple instructions has been tempered by growing concern over the potential for fast-scaling negative impacts to accompany the touted productivity benefits.

EU lawmakers have found themselves at the center of the debate — and perhaps garnering more global attention than usual — since they’re faced with the tricky task of figuring out how the bloc’s incoming AI rules should be adapted to apply to viral generative AI.  

The Commission’s original draft proposed to regulate artificial intelligence by categorizing applications into different risk bands. Under this plan, the bulk of AI apps would be categorized as low risk — meaning they escape any legal requirements. On the flip side, a handful of unacceptable risk use-cases would be outright prohibited (such as China-style social credit scoring). Then, in the middle, the framework would apply rules to a third category of apps where there are clear potential safety risks (and/or risks to fundamental rights) which are nonetheless deemed manageable.

The AI Act contains a set list of “high risk” categories which covers AI being used in a number of areas that touch safety and human rights, such as law enforcement, justice, education, employment, healthcare and so on. Apps falling in this category would be subject to a regime of pre- and post-market compliance, with a series of obligations in areas like data quality and governance; and mitigations for discrimination — with the potential for enforcement (and penalties) if they breach requirements.

The proposal also contained another middle category which applies to technologies such as chatbots and deepfakes — AI-powered technologies that raise some concerns but not, in the Commission’s view, so many as high-risk scenarios. Such apps don’t attract the full sweep of compliance requirements in the draft text but the law would apply transparency requirements that aren’t demanded of low-risk apps.

Being first to the punch drafting laws for such a fast-developing, cutting-edge tech field meant the EU was working on the AI Act long before the hype around generative AI went mainstream. And while the bloc’s lawmakers were moving rapidly in one sense, its co-legislative process can be pretty painstaking. So, as it turns out, two years on from the first draft the exact parameters of the AI legislation are still in the process of being hashed out.

The EU’s co-legislators, in the parliament and Council, hold the power to revise the draft by proposing and negotiating amendments. So there’s a clear opportunity for the bloc to address loopholes around generative AI without needing to wait for follow-on legislation to be proposed down the line, with the greater delay that would entail. 

Even so, the EU AI Act probably won’t be in force before 2025 — or even later, depending on whether lawmakers decide to give app makers one or two years before enforcement kicks in. (That’s another point of debate for MEPs, per Tudorache.)

He stresses that it will be important to give companies enough time to prepare to comply with what he says will be “a comprehensive and far-reaching regulation”. He also emphasizes the need to allow time for Member States to prepare to enforce the rules around such complex technologies, adding: “I don’t think that all Member States are prepared to play the regulator role. They need themselves time to ramp up expertise, find expertise, to convince expertise to work for the public sector.

“Otherwise, there’s going to be such a disconnect between the realities of the industry, the realities of implementation, and the regulator, and you won’t be able to force the two worlds into each other. And we don’t want that either. So I think everybody needs that lag.”

MEPs are also seeking to amend the draft AI Act in other ways — including by proposing a centralized enforcement element to act as a sort of backstop for Member State-level agencies; as well as proposing some additional prohibited use-cases (such as predictive policing, which is an area where the Council may well seek to push back).

“We are changing fundamentally the governance from what was in the Commission text, and also what is in the Council text,” says Tudorache on the enforcement point. “We are proposing a much stronger role for what we call the AI Office. Including the possibility to have joint investigations. So we’re trying to put as sharp teeth as possible. And also avoid silos. We want to avoid the 27 different jurisdiction effect [i.e. of fragmented enforcements and forum shopping to evade enforcement].”

The EU’s approach to regulating AI draws on how it’s historically tackled product liability. The fit is obviously a stretch, given how malleable AI technologies are and the length and complexity of the ‘AI value chain’ — i.e. how many entities may be involved in the development, iteration, customization and deployment of AI models. So figuring out liability along that chain is absolutely a key challenge for lawmakers.

The risk-based approach also raises specific questions over how to handle the particularly viral flavor of generative AI that’s blasted into mainstream consciousness in recent months, since these tools don’t necessarily have a clear cut use-case. You can use ChatGPT to conduct research, generate fiction, write a best man’s speech, churn out marketing copy or pen lyrics to a cheesy pop song, for example — with the caveat that what it outputs may be neither accurate nor much good (and it certainly won’t be original).

Similarly, generative AI art tools could be used for different ends: As an inspirational aid to artistic production, say, to free up creatives to do their best work; or to replace the role of a qualified human illustrator with cheaper machine output.

(Some also argue that generative AI technologies are even more speculative than that: not general purpose at all, but inherently flawed and incapable; an amalgam of blunt-force investment being imposed on societies without permission or consent, in a cripplingly expensive, rights-trampling, fishing-expedition-style search for profit-making solutions.)

The core concern MEPs are seeking to tackle, therefore, is to ensure that underlying generative AI models like OpenAI’s GPT can’t just dodge risk-based regulation entirely by claiming they have no set purpose.

Deployers of generative AI models could also seek to argue they’re offering a tool that’s general purpose enough to escape any liability under the incoming law — unless there is clarity in the regulation about relative liabilities and obligations throughout the value chain.

One obviously unfair and dysfunctional scenario would be for all the regulated risk and liability to be pushed downstream, onto only the deployers of specific high-risk apps. These entities would, almost certainly, be using generative AI models developed by others upstream — so they wouldn’t have access to the data, weights etc. used to train the core model — which would make it impossible for them to comply with AI Act obligations, whether around data quality or mitigating bias.

There was already criticism about this aspect of the proposal prior to the generative AI hype kicking off in earnest. But the speed of adoption of technologies like ChatGPT appears to have convinced parliamentarians of the need to amend the text to make sure generative AI does not escape being regulated.

And while Tudorache isn’t in a position to know whether the Council will align with the parliamentarians’ sense of mission here, he says he has “a feeling” they will buy in — albeit, most likely seeking to add their own “tweaks and bells and whistles” to how exactly the text tackles general purpose AIs.

In terms of next steps, once MEPs close their discussions on the file there will be a few votes in the parliament to adopt the mandate. (First two committee votes and then a plenary vote.)

He predicts the latter will “very likely” end up taking place in the plenary session in early June — setting up for trilogue discussions to kick off with the Council and a sprint to get agreement on a text during the six months of the Spanish presidency. “I’m actually quite confident… we can finish with the Spanish presidency,” he adds. “They are very, very eager to make this the flagship of their presidency.”

Asked why he thinks the Commission avoided tackling generative AI in the original proposal, he suggests even just a couple of years ago very few people realized how powerful — and potentially problematic — these technologies would become, nor indeed how quickly things could develop in the field. It’s a testament to how difficult it’s getting for lawmakers to set rules around shapeshifting digital technologies that aren’t already out of date before they’ve even been through the democratic law-setting process.

Somewhat by chance, the timeline appears to be working out for the EU’s AI Act — or, at least, the region’s lawmakers have an opportunity to respond to recent developments. (Of course it remains to be seen what else might emerge over the next two years or so of generative AI which could freshly complicate these latest futureproofing efforts.)

Given the pace and disruptive potential of the latest wave of generative AI models, MEPs are sounding keen that others follow their lead — and Tudorache was one of a number of parliamentarians who put their names to an open letter earlier this week, calling for international efforts to cooperate on setting some shared principles for AI governance.

The letter also affirms MEPs’ commitment to setting “rules specifically tailored to foundational models” — with the stated goal of ensuring “human-centric, safe, and trustworthy” AI.

He says the letter was written in response to the open letter put out last month — signed by the likes of Elon Musk (who has since been reported to be trying to develop his own GPAI) — calling for a moratorium on development of any more powerful generative AI models so that shared safety protocols could be developed.

“I saw people asking, oh, where are the policymakers? Listen, the business environment is concerned, academia is concerned, and where are the policymakers — they’re not listening. And then I thought well that’s what we’re doing over here in Europe,” he tells TechCrunch. “So that’s why I then brought together my colleagues and I said let’s actually have an open reply to that.”

“We’re not saying that the response is to basically pause and run to the hills. But to actually, again, responsibly take on the challenge [of regulating AI] and do something about it — because we can. If we’re not doing it as regulators then who else would?” he adds.

Signing MEPs also believe the task of AI regulation is such a crucial one that they shouldn’t just be waiting around in the hope that adoption of the EU AI Act will lead to another ‘Brussels effect’ kicking in a few years down the line, as happened after the bloc updated its data protection regime in 2018 — influencing a number of similar legislative efforts in other jurisdictions. Rather, this AI regulation mission must involve direct encouragement — because the stakes are simply too high.

“We need to start actively reaching out towards other like-minded democracies [and others] because there needs to be a global conversation and a global, very serious reflection as to the role of this powerful technology in our societies, and how to craft some basic rules for the future,” urges Tudorache.
