Europe’s AI Act talks head for crunch point

[Image: a GIF of a facial recognition system matching faces in a busy airport. Image Credits: TechCrunch]

Negotiations between European Union lawmakers tasked with reaching a compromise on a risk-based framework for regulating applications of artificial intelligence appear to be balanced on a knife edge.

Speaking during a roundtable yesterday afternoon, organized by the European Center for Not-For-Profit Law (ECNL) and the civil society association EDRi, Brando Benifei, MEP and one of the parliament’s co-rapporteurs for the AI legislation, described talks on the AI Act as being at a “complicated” and “difficult” stage.

The closed door talks between EU co-legislators, or “trilogues” in the Brussels policy jargon, are how most European Union law gets made.

Issues that are causing division include prohibitions on AI practices (aka Article 5’s brief list of banned uses); fundamental rights impact assessments (FRIAs); and exemptions for national security practices, according to Benifei. He suggested parliamentarians have red lines on all these issues and want to see movement from the Council — which, so far, is not giving enough ground.

“We cannot accept to move too much in the direction that would limit the protection of fundamental rights of citizens,” he told the roundtable. “We need to be clear, and we have been clear with the Council, we will not conclude [the file] in due time — we would be happy to conclude in the beginning of December — but we cannot conclude by conceding on these issues.”

Giving civil society’s assessment of the current state of play of the talks, Sarah Chander, senior policy adviser at EDRi, was downbeat — running through a long list of core civil society recommendations, aimed at safeguarding fundamental rights from AI overreach, which she suggested are being rebuffed by the Council.

For example, she said Member States are opposing a full ban on the use of remote biometric ID systems in public; refusing to agree to register the use of high-risk AI systems by law enforcement and immigration authorities; resisting a clear, loophole-proof risk classification process for AI systems; and rejecting limits on exports of prohibited systems outside the EU. She added that there are many other areas where it’s still unclear what lawmakers’ positions will be, such as the sought-after bans on biometric categorization and emotion recognition.

“We know that there is a lot of attention on how we are able to deliver an AI act that is able to protect fundamental rights and the democratic freedoms. So I think we need the real fundamental rights impact assessment,” Benifei added. “I think this is something we will be able to deliver. I’m convinced that we are on a good track on these negotiations. But I also want to be clear that we cannot accept to get an approach on the prohibitions that is giving too much [of a] free hand to the governments on very, very sensitive issues.”

The three-way discussions to hammer out the final shape of EU laws put parliamentarians and representatives of Member States’ governments (aka the Council) in a room with the EU’s executive body, the Commission, which is responsible for presenting the first draft of proposed laws. But the process doesn’t always deliver the sought-for “balanced” compromise — instead planned pan-EU legislation can get blocked by entrenched disagreements (as in the case of the still-stalled ePrivacy Regulation).

Trilogues are also notorious for lacking transparency. And in recent years there’s been rising concern that tech policy files have become a major target for industry lobbyists seeking to covertly influence laws that will affect them.

The AI file appears no different in that regard — except this time the industry lobbying pushing back on regulation appears to have come both from US giants and from a smattering of European AI startups hoping to match the scale of rivals across the pond.

Lobbying on foundational models

Per Benifei, the question of how to regulate generative AI, and so-called foundational models, is another big issue dividing EU lawmakers as a result of heavy industry lobbying targeted at Member States’ governments. “This is another topic where we see a lot of pressure, a lot of lobbying that is clearly going on also on the side of the governments,” he said. “It’s legitimate — but also we need to maintain ambition.”

On Friday, Euractiv reported that a meeting involving a technical body of the European Council broke down after representatives of two EU Member States, France and Germany, pushed back against MEPs’ proposals for a tiered approach to regulate foundational models.

It reported that opposition to regulating foundational models is being led by French AI startup Mistral. Its report also named German AI startup Aleph Alpha as actively lobbying governments to push back on dedicated measures targeting generative AI model makers.

EU lobby transparency not-for-profit Corporate Europe Observatory confirmed to TechCrunch that France and Germany are two of the Member States pushing the Council for a regulatory carve-out for foundational models.

“We have seen an extensive Big Tech lobbying of the AI Act, with countless meetings with MEPs and access to the highest levels of decision-making. While publicly these companies have called for regulating dangerous AI, in reality they are pushing for a laissez-faire approach where Big Tech decides the rules,” Corporate Europe Observatory’s Bram Vranken told TechCrunch.

“European companies including Mistral AI and Aleph Alpha have joined the fray. They have recently opened lobbying offices in Brussels and have found a willing ear with governments in France and Germany in order to obtain carve-outs for foundation models. This push is straining the negotiations and risks to derail the AI Act.

“This is especially problematic as the AI Act is supposed to protect our human rights against risky and biased AI systems. Corporate interests are now undermining those safeguards.”

Reached for a response to the charge of lobbying for a regulatory carve-out for foundational models, Mistral CEO Arthur Mensch did not deny the company has been pressing lawmakers not to put regulatory obligations on upstream model makers. But he rejected the suggestion that it is “blocking anything”.

“We have constantly been saying that regulating foundational models did not make sense and that any regulation should target applications, not infrastructure. We are happy to see that the regulators are now realizing it,” Mensch told TechCrunch.

Asked how, in this scenario, downstream deployers of foundational models would be able to ensure their apps are free of bias and other potential harms without the necessary access to the core model and its training data, he suggested: “The downstream user should be able to verify how the model works in its use case. As foundational model providers, we will provide the evaluation, monitoring and guardrailing tools to simplify these verifications.”

“To be clear, we’re very much in favour of regulating AI applications,” Mensch added. “The last version of the AI Act regulates foundational models in the worst possible manner since definitions are very imprecise, making the compliance weights enormous, whatever the model capacities. It effectively signals that small companies stand no chance due to the regulatory barrier and solidifies the large corporation dominance (while they are all US-based). We have been publicly vocal about this.”

Aleph Alpha was also contacted for comment on the reports of lobbying, but at the time of writing it had not responded.

Reacting to reports of AI giants lobbying to water down EU AI rules, Max Tegmark, president of the Future of Life Institute, an advocacy organization with a particular focus on AI existential risk, sounded the alarm over possible regulatory capture.

“This last-second attempt by Big Tech to exempt the future of AI would make the EU AI Act the laughing-stock of the world, not worth the paper it’s printed on,” he told TechCrunch. “After years of hard work, the EU has the opportunity to lead a world waking up to the need to regulate these increasingly powerful and dangerous systems. Lawmakers must stand firm and protect thousands of European companies from the lobbying attempts of Mistral and US tech giants.”

Where the Council will land on foundational models remains unclear but pushback from powerful Member States like France could lead to another impasse here if MEPs stick to their guns and demand accountability from upstream AI model makers.

An EU source close to the Council confirmed the issues Benifei highlighted remain “tough points” for Member States — which they said are showing “very little” flexibility, “if any”. Our source, who was speaking on condition of anonymity because they’re not authorized to make public statements to the press, nonetheless stopped short of saying the issues represent indelible red lines for the Council.

They also suggested there’s still hope for a conclusive trilogue on December 6 as discussions in the Council’s preparatory bodies continue and Member States look for ways to provide a revised mandate to the Spanish presidency.

Technical teams from the Council and Parliament are also continuing to work to try to find possible “landing zones” — in a bid to keep pushing for a provisional agreement at the next trilogue. However, our source suggested it’s too early to say where exactly any potential intersections might be given how many sticking points remain (most of which they described as being “highly sensitive” for both EU institutions).

For his part, co-rapporteur Benifei said parliamentarians remain determined that the Council must give ground. If it does not, he suggested there’s a risk the whole Act could fail — which would have stark implications for fundamental rights in an age of exponentially increasing automation.

“The topic of the fundamental rights impact assessment; the issue of Article 5; the issue of the law enforcement [are] where we need to see more movement from the Council. Otherwise there will be a lot of difficulty to conclude because we do not want an AI Act unable to protect fundamental rights,” he warned. “And so we will need to be strict on these.

“We have been clear. I hope there will be movement from the side of the governments knowing that we need some compromise otherwise we will not deliver any AI Act and that would be worse. We see how the governments are already experimenting with applications of the technology that is not respectful of fundamental rights. We need rules. But I think we also need to be clear on the principles.”

Fundamental rights impact assessments

Benifei sounded most hopeful that a compromise could be achieved on FRIAs, suggesting parliament’s negotiators are shooting for something “very close” to their original proposal.

MEPs introduced the concept as part of a package of proposed changes to the Commission draft legislation geared towards bolstering protections for fundamental rights. EU data protection law already features data protection impact assessments, which encourage data processors to make a proactive assessment of potential risks attached to handling people’s data.

The idea is that FRIAs would do something similarly proactive for applications of AI — nudging developers and deployers to consider up front how their apps and tools might interfere with fundamental democratic freedoms, and to take steps to avoid or mitigate potential harms.

“I have more worries about the positions regarding the law enforcement exceptions on which I think the Council needs to move much more,” Benifei went on, adding: “I’m very much convinced that it’s important that we keep the pressure from [civil society] on our governments to not stay on positions that would prevent the conclusion of some of these negotiations, which is not in the interest of anyone at this stage.”

Lidiya Simova, a policy advisor to MEP Petar Vitanov, who was also speaking at the roundtable, pointed out that FRIAs had met with “a lot of opposition from private sector saying that this was going to be too burdensome for companies”. So while she said this issue hasn’t yet had “proper discussion” in trilogues, she suggested MEPs are anticipating more pushback here too — such as an attempt to exempt private companies from having to conduct these assessments at all.

But, again, whether the parliament would accept such a watering down of an intended check and balance is “a longer shot”, in her view.

“The text that we had in our mandate was a bit downgraded to what we initially had in mind. So going further down from that… you risk getting to a point where you make it useless. You keep it in name, and in principle, but if it doesn’t accomplish anything — if it’s just a piece of paper that people just sign and say, oh, hey, I did a fundamental rights impact assessment — what’s the added value of that?” she posited. “For any obligation to be meaningful there have to be repercussions if you don’t meet the obligation.”

Simova also argued the scale of the challenge lawmakers are encountering in achieving accord on the AI file goes beyond individual disputed issues. Rather it’s structural, she suggested. “A bigger problem that we’re trying to solve, which is why it’s taken so long for the AI Act to come, is basically that you’re trying to safeguard fundamental rights with the product safety legislation,” she noted, referencing a long-standing critique of the EU’s approach. “And that’s not very easy. I don’t even know whether it will be possible at the end of the day.

“That’s why there’ve been so many amendments from the Parliament so many times, so many drafts going back and forth. That’s why we have such different notions on the topic.”

If the talks fail to achieve consensus, the EU’s bid to be a world leader when it comes to setting rules for artificial intelligence could founder in light of a tightening timeline going into European elections next year.

Scramble to rule

Establishing a rulebook for AI was a priority set out by European Commission president Ursula von der Leyen when she took up her post at the end of 2019. The Commission went on to propose a draft law in April 2021, after which the parliament and Council agreed on their respective negotiating mandates and the trilogues kicked off this summer — under Spain’s presidency of the Council.

A key development filtering into talks between lawmakers this year has been the ongoing hype and attention garnered by generative AI, after OpenAI opened up access to its AI chatbot, ChatGPT, late last year — a democratizing of access which triggered an industry-wide race to embed AI into all sorts of existing apps, from search engines to productivity tools.

MEPs responded to the generative AI boom by hardening their conviction that its risks need comprehensive regulation. But the tech industry pushed back — with AI giants combining the writing of eye-catching public letters warning about “extinction”-level AI risks with private lobbying against tighter regulation of their current systems.

Sometimes the latter hasn’t even been done privately, such as in May when OpenAI’s CEO casually told a Time journalist that his company could “cease operating” in the European Union if its incoming AI rules prove too arduous.

As noted above, if the AI file isn’t wrapped up next month there’s relatively limited time left in the EU’s calendar to work through tricky negotiations. European elections and new Commission appointments next year will reboot the make-up of the parliament and the college of commissioners respectively. So there’s a narrow window to clinch a deal before the bloc’s political landscape reforms.

There is also far more attention, globally, on the issue of regulating AI than when the Commission first proposed dashing ahead to lay down a risk-based framework. The window of opportunity for the EU to make good on its “rule maker, not rule taker” mantra in this area, and get a clean shot at influencing how other jurisdictions approach AI governance, also looks to be narrowing.

The next AI Act trilogue is scheduled for December 6; mark the date, as this next set of talks could be make or break for the file.

If no deal is reached and disagreements are pushed on into next year, there would only be a few months of negotiating time, under the incoming Belgian Council presidency, before talks would have to stop as the European Parliament dissolves ahead of elections in June. (Support for the AI file after that cannot be predicted, given that the political make-up of the parliament and Commission could look substantially different, and the Council presidency is due to pass to Hungary.)

The current Commission, under president von der Leyen, has chalked up multiple successes on passing ambitious digital regulations since getting to work in earnest in 2020, with lawmakers weighing in behind the Digital Services Act, Digital Markets Act, several data focused regulations and a flashy Chips Act, among others.

But reaching accord on setting rules for AI — perhaps the fastest moving cutting edge of tech yet seen — may prove a bridge too far for the EU’s well-oiled policymaking machine.

During yesterday’s roundtable delegates took a question from a remote participant that referenced the AI executive order issued by US president Joe Biden last month — wondering whether and how it might influence the shape of the EU AI Act negotiations. There was no clear consensus on that, but one attendee chipped in to offer the unthinkable: that the US might end up further ahead on regulating AI than the EU if the Council forces a carve-out for foundational models.

“We’re living in such a world that every time somebody says that they’re making a law regulat[ing] AI it has an impact for everyone else,” the speaker went on to offer, adding: “I actually think that existing legislations will have more impact on AI systems when they start to be properly enforced on AI. Maybe it’ll be interesting to see how other rules, existing rules like copyright rules, or data protection rules, are going to get applied more and more on the AI systems. And this will happen with or without AI Act.”

This report was updated with additional comment from Max Tegmark, and with further remarks from Mensch in response to our follow-up question. We also issued a correction, as Bram Vranken works for Corporate Europe Observatory, not Lobbycontrol, as we originally reported.
