EU says incoming rules for general purpose AIs can evolve over time

The political deal clinched by European Union lawmakers late Friday over what the bloc is billing as the world’s first comprehensive law for regulating artificial intelligence includes powers for the Commission to adapt the pan-EU AI rulebook to keep pace with developments in the cutting-edge field, the EU executive has confirmed.

Lawmakers’ terminology for the most powerful models behind the current boom in generative AI tools — which the EU Act refers to as “general purpose” AI models and systems, rather than using industry terms of choice like “foundational” or “frontier” models — was also picked with an eye on future-proofing the incoming law, per the Commission, with co-legislators favoring a generic term to avoid a classification that could be chained to the use of a specific technology (i.e. transformer-based machine learning).

“In the future, we may have different technical approaches. And so we were looking for a more generic term,” a Commission official suggested today. “Foundation models, of course, are part of the general purpose AI models. These are models that can be used for a very large variety of tasks, they can also be integrated in systems. To give you a concrete example, the general purpose AI model would be GPT-4 and the general purpose AI system would be ChatGPT — where GPT-4 is integrated in ChatGPT.”

As we reported earlier, the deal agreed by the bloc’s co-legislators includes a low risk tier and a high risk tier for regulating so-called general purpose AIs (GPAIs) — such as models behind the viral boom in generative AI tools like OpenAI’s ChatGPT. The trigger for high risk rules to apply to generative AI technologies is determined by an initial threshold set out in the law.


Also as we reported Thursday, the agreed draft of the EU AI Act references the amount of compute used to train the models, measured in floating point operations (FLOPs) — setting the bar for a GPAI to be considered to have “high impact capabilities” at 10^25 FLOPs.

But during a technical briefing with journalists today to review the political deal the Commission confirmed this is just an “initial threshold”, affirming it will have powers to update the threshold over time via implementing/delegated acts (i.e. secondary legislation). It also said the idea is for the FLOPs threshold to be combined, over time, with “other benchmarks” that will be developed by a new expert oversight body to be set up within the Commission, called the AI Office.

Why was 10^25 FLOPs selected as the high risk threshold for GPAIs? The Commission suggests the figure was picked with the intention of capturing current-gen frontier models. However, it claimed lawmakers neither discussed nor considered whether the threshold would apply to any models currently in play, such as OpenAI’s GPT-4 or Google’s Gemini, during the marathon trilogues to agree the final shape of the rulebook.

A Commission official added that it will, in any case, be up to makers of GPAIs to self-assess whether their models meet the FLOPs threshold and, therefore, whether they fall under the rules for GPAIs “with systemic risk” or not.
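To give a sense of the scale involved in such a self-assessment, the threshold can be sanity-checked with the widely used back-of-the-envelope estimate that dense transformer training costs roughly 6 FLOPs per parameter per training token. The model sizes in this sketch are illustrative assumptions, not figures from the Act or from any vendor:

```python
# Rough self-assessment of training compute against the EU AI Act's
# initial 10^25 FLOPs threshold for GPAIs "with systemic risk".
# Uses the common ~6 * parameters * training-tokens approximation for
# dense transformer training compute (forward + backward passes).

SYSTEMIC_RISK_THRESHOLD = 1e25  # initial threshold set out in the Act


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens


def is_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """Would this (hypothetical) model cross the Act's initial bar?"""
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD


# Hypothetical 70B-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)  # ~8.4e23, below the threshold
print(f"{flops:.2e}", is_systemic_risk(70e9, 2e12))
```

By this approximation, a 70B-parameter model trained on 2 trillion tokens lands around 8.4×10^23 FLOPs, more than an order of magnitude below the Act’s initial bar — consistent with the Commission’s suggestion that the threshold is meant to capture only the frontier.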

“There are no official sources that will say ChatGPT or Gemini or Chinese models are at this level of FLOPs,” the official said during the press briefing. “On the basis of the information we have and with this 10^25 that we have chosen we have chosen a number that could really capture, a little bit, the frontier models that we have. Whether this is capturing GPT-4 or Gemini or others we are not here now to assert — because also, in our framework, it is the companies that would have to come and self assess what the amount of FLOPs or the computing capacity they have used. But, of course, if you read the scientific literature, many will point to these numbers as being very much the most advanced models at the moment. We will see what the companies will assess because they’re the best placed to make this assessment.”

“The rules have not been written keeping in mind certain companies,” they added. “They’ve really been written with the idea of defining the threshold — which, by the way, may change because we have the possibility to be empowered to change this threshold on the basis of technological evolution. It could go up, it could go down and we could also develop other benchmarks that in the future will be the more appropriate to benchmark the different moments.”

GPAIs that fall in the AI Act’s high risk tier will face ex ante-style regulatory requirements to assess and mitigate systemic risks — meaning they must proactively test model outputs to shrink risks of actual (or “reasonably foreseeable”) negative effects on public health, safety, public security, fundamental rights, or for society as a whole.

“Low tier” GPAIs, meanwhile, will only face lighter transparency requirements, including obligations to apply watermarking to generative AI outputs.

The watermarking requirement for GPAIs falls under an article that featured in the original Commission version of the risk-based framework, presented all the way back in April 2021, which focused on transparency requirements for technologies such as AI chatbots and deepfakes — but which will now also apply more broadly to general purpose AI systems.

“There is an obligation to try to watermark [generative AI-produced] text on the basis of the latest state of the art technology that is available,” the Commission official said, fleshing out details of the agreed watermarking obligations. “At the moment, technologies are much better at watermarking videos and audio than watermarking text. But what we ask is the fact that this watermarking takes place on the basis of state of the art technology — and then we expect, of course, that over time the technology will mature and will be as [good] as possible.”
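The state of the art the official alludes to for text is typified by “greenlist” watermarking schemes from the research literature, where the generator biases sampling toward a keyed pseudorandom subset of tokens and a detector checks whether that subset is over-represented. Below is a toy sketch of the detection side only — the key, the token hashing, and the 50/50 vocabulary split are all illustrative assumptions, not any vendor’s scheme or anything the Act prescribes:

```python
# Toy "greenlist" watermark detection sketch: a keyed hash of each
# (previous token, token) pair pseudorandomly assigns ~half of all
# tokens to a greenlist. Watermarked generators oversample greenlist
# tokens, so their text shows an elevated greenlist fraction.
import hashlib


def is_green(prev_token: str, token: str, key: str = "secret") -> bool:
    """Pseudorandomly assign roughly half of tokens to the greenlist."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0


def green_fraction(tokens: list[str]) -> float:
    """Fraction of token pairs landing in the greenlist: ~0.5 for
    ordinary text, noticeably higher for watermarked output."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Real detectors compute a statistical score over much longer passages; the point here is only that detection requires the key, not access to the model itself — which is part of why text watermarking remains less robust than watermarking audio or video.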

GPAI model makers must also commit to respecting EU copyright rules, including complying with an existing machine readable opt-out from text and data mining contained in the EU Copyright Directive. Notably, the carve-out from the Act’s transparency requirements for open source GPAIs does not extend to cutting them loose from these copyright obligations, with the Commission confirming the Copyright Directive will still apply to open source GPAIs.
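In practice, one common machine-readable way rightsholders express such an opt-out today is via robots.txt rules targeting known AI-training crawlers. A minimal sketch using Python’s standard-library parser — the crawler name “ExampleAIBot” is an assumption for illustration, not a string defined by the Directive:

```python
# Check whether a hypothetical AI-training crawler is permitted to
# fetch a page, per a site's robots.txt. A "Disallow: /" rule for the
# crawler's user agent acts as a machine-readable opt-out signal.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))      # True
```

A compliant model maker would be expected to honor such signals when assembling training data, whatever the concrete opt-out mechanism the final text and subsequent guidance settle on.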

As regards the AI Office, which will play a key role in setting risk classification thresholds for GPAIs, the Commission confirmed there’s no budget or headcount defined for the expert body as yet. (Although, in the small hours of Saturday morning the bloc’s internal market commissioner, Thierry Breton, suggested the EU is set to welcome “a lot” of new colleagues as it tools up this general purpose AI oversight body.)

Asked about resourcing for the AI Office, a Commission official said it will be decided in the future by the EU’s executive taking “an appropriate and official decision”. “The idea is that we can create a dedicated budget line for the Office and that we will be able also to recruit the national experts from Member States if we wish to on top of contractual agents and on top of permanent staff. And some of these staff will also be deployed within the European Commission,” they added.

The AI Office will work in conjunction with a new scientific advisory panel the law will also establish to aid the body to better understand the capabilities of advanced AI models for the purpose of regulating systemic risk. “We have identified an important role for a scientific panel to be set up where the scientific panel can effectively help the Artificial Intelligence Office in understanding whether there are new risks that have not been yet identified,” the official noted. “And, for example, also flag some alerts about the models that are not captured by the FLOP threshold that for certain reasons could actually give rise to important risks that governments should look at.”

While the EU’s executive seems keen to ensure key details of the incoming law are put out there in spite of there being no final text yet — because work to consolidate what was agreed by co-legislators during the marathon 38 hour talks that ended on Friday night is the next task facing the bloc over the coming weeks — there could still be some devils lurking in that detail. So it will be worth scrutinizing the text that emerges, likely in January or February.

Additionally, while the full regulation won’t be up and running for a few years, the EU will be pushing for GPAIs to abide by codes of practice in the meantime — so AI giants will be under pressure to stick as close as possible to the hard regulations coming down the pipe, via the bloc’s AI Pact.

The EU AI Act itself likely won’t be in full force until some time in 2026 — given the final text must, once compiled (and translated into Member States’ languages), be affirmed by final votes in the parliament and Council, after which there’s a short period before the text of the law is published in the EU’s Official Journal and another before it comes into force.

EU lawmakers have also agreed a phased approach to the Act’s compliance demands, with 24 months allowed before the high risk rules will apply to GPAIs.

The list of strictly prohibited use-cases of AI will apply sooner, just six months after the law enters into force — which could, potentially, mean bans on certain “unacceptable risk” uses of AI, such as social scoring or Clearview AI-style selfie scraping for facial recognition databases, will get up and running in the second half of 2024, assuming no last minute opposition to the regulation springs up within the Council or Parliament. (For the full list of banned AI uses, read our earlier post.)
