Europe takes another big step toward agreeing an AI rulebook

Image Credits: d3sign / Getty Images

The European Parliament has voted to confirm its negotiating mandate for the AI Act — hitting a major milestone which unlocks the next stage of negotiations toward a pan-EU rulebook for artificial intelligence.

Parliamentarians backed an amended version of the Commission proposal that expands the rulebook in a way they say is aimed at ensuring AI that’s developed and used in Europe is “fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing”.

Among the changes MEPs have backed is a total ban on remote biometric surveillance and on predictive policing. They have also added a ban on “untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases” — so basically a hard prohibition on Clearview AI and its ilk.

The proposed ban on remote biometric surveillance would apply to both real-time and post (after the fact) applications of technologies like facial recognition, except, in the latter case, for use by law enforcement in the prosecution of serious crimes and only with judicial sign-off.

MEPs also added a ban on the use of emotion recognition tech by law enforcement, border agencies, workplaces and educational institutions.

Parliamentarians also expanded the classification of high-risk AI systems to include those that pose a significant risk of harm to people’s health, safety, fundamental rights or the environment, as well as AI systems used to influence voters and the outcome of elections.

Larger social media platforms that use algorithms to recommend content were also added to the high-risk list by MEPs.

The plenary vote follows committee backing for the amended proposal last month after MEPs from different political groups hashed out how they wanted to tweak the Commission text, including by adding obligations on makers of so-called general purpose AI.

Responding to fast-paced developments in generative AI, MEPs have backed putting a set of obligations on foundation/general purpose AI models, such as the technology that underpins OpenAI’s chatbot ChatGPT. Such systems would be required to identify and mitigate risks prior to being placed on the market, apply transparency disclosures to AI-generated content and implement safeguards against the generation of illegal content.

Makers of general purpose AIs must also publish “detailed summaries” of copyrighted information used to train their models under the MEPs’ proposal.

During a tour of European capitals to meet with lawmakers last month, OpenAI CEO Sam Altman was critical of this aspect of the EU proposal. He suggested the company might have to withdraw its service from the region if it was unable to comply, telling journalists he was hopeful the obligations would be rolled back.

In the event, today’s plenary vote shows overwhelming support among parliamentarians for the amended version of the draft legislation — including the proposed obligations for general purpose AIs — with 499 votes in favour, and just 28 against (plus 93 abstentions).

The vote passing the mandate means discussions between the parliament and EU Member State governments can now kick off — with the first trilogue slated to take place this evening.

Commenting in a statement after the vote, co-rapporteur Brando Benifei said:

All eyes are on us today. While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose. We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council.

In another supporting statement, co-rapporteur Dragos Tudorache added:

The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law.

The version of the AI Act MEPs have backed today also adds exemptions for research activities and for AI components provided under open source licenses, which MEPs suggest will ensure support for innovation, along with regulatory sandboxes for testing AI systems, which are set to be established under the framework.

MEPs’ proposal also adds a suite of consumer rights over AI decision making — including the ability for consumers to ask for collective redress if an AI system has caused them harm.

The European consumer organization, BEUC, welcomed these changes but was critical of the parliament for not backing a total ban on the use of emotion recognition AIs (since the proposal does not limit commercial use of such snake oil).

It also thinks MEPs have given developers too much discretion to decide whether or not their systems fall into the high-risk category, which it says could undermine the efficacy of the risk-based framework.

That may prove one bone of contention during trilogue discussions, which need to find a compromise between the position of the EU Council (the body composed of Member State governments) and lawmakers in the parliament to clinch the necessary political agreement on a final text and seal the file.

Typically, the EU Council takes a more pro-industry line while parliament tends to be more concerned with fundamental rights. So where the two sides will meet in the middle on regulating AI remains to be seen.

If they can’t agree, the EU’s law-making process can stall — or even fail. But there’s an impetus in Brussels to get this file over the line given how much global attention is now fixed on regulating AI. (Being first to the punch with a democratic rulebook for AI presents opportunities for the bloc to exert influence beyond its borders as other jurisdictions scramble to figure out their own approaches to regulating a complex field of fast-developing technology.)

The Council adopted its position on the file back in December. At that time, Member States largely favored deferring the question of how to handle general purpose AI to additional implementing legislation. But, given what’s happened in the interim, with generative AI tools like ChatGPT shooting to center stage of the discussion about the tech and generating multiple calls for regulation (including from plenty of tech industry types themselves), it will be interesting to see whether Member States will agree with MEPs on the need to add obligations for this class of AI systems to the text of the AI Act.

The EU’s executive presented the original proposal for the risk-based framework for AI back in April 2021. While that first Commission draft text did not grapple so extensively with the topic of general purpose AI, it did propose transparency provisions for chatbots and deepfake technology. So even back then EU lawmakers were taking the view that consumers should be informed they’re interacting with machine generated content.

While the Commission remains hopeful that trilogue talks on the AI Act file will deliver political agreement by the end of this year, there will still be an implementation period — so the legislation will likely not apply before 2026.

This is why the EU is also working on several voluntary initiatives that aim to press AI firms to self-regulate on safety in the meantime.

