Europe lays out plan for risk-based AI rules to boost trust and uptake

European Union lawmakers have presented their risk-based proposal for regulating high risk applications of artificial intelligence within the bloc’s single market.

The plan includes prohibitions on a small number of use-cases that are considered too dangerous to people’s safety or EU citizens’ fundamental rights, such as a China-style social credit scoring system or AI-enabled behavior manipulation techniques that can cause physical or psychological harm. There are also restrictions on law enforcement’s use of biometric surveillance in public places — but with very wide-ranging exemptions.

Most uses of AI won’t face any regulation (let alone a ban) under the proposal. But a subset of so-called “high risk” uses will be subject to specific regulatory requirements, both ex ante (before) and ex post (after) launching into the market.

There are also transparency requirements for certain use-cases of AI — such as chatbots and deepfakes — where EU lawmakers believe that potential risk can be mitigated by informing users they are interacting with something artificial.

The planned law is intended to apply to any company selling an AI product or service into the EU, not just to EU-based companies and individuals — so, as with the EU’s data protection regime, it will be extraterritorial in scope.

The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost uptake of the technology. Senior Commission officials talk about wanting to develop an “excellence ecosystem” that’s aligned with European values.

“Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said Commission EVP, Margrethe Vestager, announcing adoption of the proposal at a press conference.

“On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”

Under the proposal, mandatory requirements are attached to a “high risk” category of applications of AI — meaning those that present a clear safety risk or threaten to impinge on EU fundamental rights (such as the right to non-discrimination).

Examples of high risk AI use-cases that will be subject to the highest level of regulation on use are set out in annex 3 of the regulation — which the Commission said it will have the power to expand by delegated acts, as use-cases of AI continue to develop and risks evolve.

For now, the cited high risk examples fall into the following categories:

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes

Military uses of AI are specifically excluded from scope as the regulation is focused on the bloc’s internal market.

The makers of high risk applications will have a set of ex ante obligations to comply with before bringing their product to market, including around the quality of the data-sets used to train their AIs and a level of human oversight over not just design but use of the system — as well as ongoing, ex post requirements, in the form of post-market surveillance.

Other requirements include a need to create records of the AI system to enable compliance checks and also to provide relevant information to users. The robustness, accuracy and security of the AI system will also be subject to regulation.

Commission officials suggested the vast majority of applications of AI will fall outside this highly regulated category. Makers of those ‘low risk’ AI systems will merely be encouraged to adopt (non-legally binding) codes of conduct on use.

Penalties for infringing the rules on banned AI use-cases have been set at up to 6% of global annual turnover or €30M, whichever is greater. Violations of the rules attached to high risk applications, meanwhile, can scale up to 4% (or €20M).
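For illustration only, the two penalty tiers amount to a "whichever is greater" cap on fines — a minimal sketch (the function and its name are ours, not part of the proposal; figures are those cited above):

```python
def max_fine(global_turnover_eur: float, prohibited_use: bool) -> float:
    """Illustrative maximum penalty cap under the proposal: the greater of
    a percentage of global annual turnover and a fixed euro amount."""
    if prohibited_use:
        # Breaching a ban on a prohibited use-case: up to 6% or €30M
        pct, fixed = 0.06, 30_000_000
    else:
        # Breaching the rules on high risk applications: up to 4% or €20M
        pct, fixed = 0.04, 20_000_000
    return max(pct * global_turnover_eur, fixed)
```

So a company with €1BN in global turnover breaching a ban could face a fine of up to €60M (6% of turnover), while a small firm with €100M turnover would still face the €30M fixed cap, since that is the greater figure.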

Enforcement will involve multiple agencies in each EU Member State — with the proposal intending oversight be carried out by existing (relevant) agencies, such as product safety bodies and data protection agencies.

That raises immediate questions over adequate resourcing of national bodies, given the additional work and technical complexity they will face in policing the AI rules; and also how enforcement bottlenecks will be avoided in certain Member States. (Notably, the EU’s General Data Protection Regulation is also overseen at the Member State level and suffers from lack of uniformly vigorous enforcement.)

But the Commission does appear to have wised up to the risk of enforcement blockages: Article 37 of the proposal gives the EU executive power to investigate cases where “there are reasons to doubt whether a notified body complies with the requirements laid down in Article 33”. And also the power to “adopt a reasoned decision” where a Member State agency has failed to meet its obligations. 

There will also be an EU-wide database set up to create a register of high risk systems implemented in the bloc (which will be managed by the Commission).

A new body, called the European Artificial Intelligence Board (EAIB), will also be set up to support consistent application of the regulation — mirroring the European Data Protection Board, which offers guidance on applying the GDPR.

In step with rules on certain uses of AI, the plan includes measures to co-ordinate EU Member State support for AI development, under a 2021 update to the EU’s 2018 Coordinated Plan. These include establishing regulatory sandboxes and co-funding Testing and Experimentation Facilities to help startups and SMEs develop and accelerate AI-fuelled innovations; establishing a network of European Digital Innovation Hubs, intended as ‘one-stop shops’ to help SMEs and public administrations become more competitive in this area; and the prospect of targeted EU funding to support homegrown AI.

Internal market commissioner Thierry Breton said investment is a crucial piece of the plan. “Under our Digital Europe and Horizon Europe program we are going to free up a billion euros per year. And on top of that we want to generate private investment and a collective EU-wide investment of €20BN per year over the coming decade — the ‘digital decade’ as we have called it,” he said during today’s press conference. “We also want to have €140BN which will finance digital investments under Next Generation EU [COVID-19 recovery fund] — and going into AI in part.”

Shaping rules for AI has been a key priority for EU president Ursula von der Leyen who took up her post at the end of 2019. A white paper was published last year, following a 2018 AI for EU strategy — and Vestager said that today’s proposal is the culmination of three years’ work.

Breton suggested that providing guidance for businesses to apply AI will give them legal certainty and Europe an edge.

“Trust… we think is vitally important to allow the development we want of artificial intelligence,” he said. “[Applications of AI] need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.”

“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines — we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also in the continent where you will have the largest amount of industrial data created on the planet for the next ten years.

“So come here — because artificial intelligence is about data — we’ll give you the guidelines. We will also have the tools to do it and the infrastructure.”

A version of today’s proposal leaked last week — leading to calls by MEPs to beef up the plan, such as by banning remote biometric surveillance in public places.

In the event, the final proposal does treat remote biometric surveillance as a particularly high risk application of AI — and there is a prohibition in principle on law enforcement’s use of the technology in public.

However, use is not completely proscribed: there are a number of exceptions where law enforcement would still be able to make use of it, subject to a valid legal basis and appropriate oversight.

Protections attacked as too weak

Reactions to the Commission’s proposal included plenty of criticism of overly broad exemptions for law enforcement’s use of remote biometric surveillance (such as facial recognition tech) as well as concerns that measures in the regulation to address the risk of AI systems discriminating don’t go nearly far enough.

Criminal justice NGO, Fair Trials, said radical improvements are needed if the regulation is to contain meaningful safeguards in relation to criminal justice. Commenting in a statement, Griff Ferris, legal and policy officer for the NGO, said: “The EU’s proposals need radical changes to prevent the hard-wiring of discrimination in criminal justice outcomes, protect the presumption of innocence and ensure meaningful accountability for AI in criminal justice.

“The legislation lacks any safeguards against discrimination, while the wide-ranging exemption for ‘safeguarding public security’ completely undercuts what little safeguards there are in relation to criminal justice. The framework must include rigorous safeguards and restrictions to prevent discrimination and protect the right to a fair trial. This should include restricting the use of systems that attempt to profile people and predict the risk of criminality.” 

The Civil Liberties Union for Europe (Liberties) also hit out at loopholes that the NGO said would allow EU Member States to get around bans on problematic uses of AI.

“There are way too many problematic uses of the technology that are allowed, such as the use of algorithms to forecast crime or to have computers assess the emotional state of people at border control, both of which constitute serious human rights risks and pose a threat to the values of the EU,” warned Orsolya Reich, senior advocacy officer, in a statement. “We are also concerned that the police could use facial recognition technology in ways that endanger our fundamental rights and freedoms.”

Patrick Breyer, German Pirate MEP, warned that the proposal falls short of meeting the claimed bar of respect for ‘European values’. The MEP was one of 40 who signed a letter to the Commission last week warning that a leaked version of the proposal didn’t go far enough in protecting fundamental rights.

“We must seize the opportunity to let the European Union bring artificial intelligence in line with ethical requirements and democratic values. Unfortunately, the Commission’s proposal fails to protect us from the dangers to gender justice and equal treatment of all groups, such as through facial recognition systems or other kinds of mass surveillance,” said Breyer in a statement reacting to the formal proposal today.

“Biometric and mass surveillance, profiling and behavioural prediction technology in our public spaces undermines our freedom and threatens our open societies. The European Commission’s proposal would bring the high-risk use of automatic facial recognition in public spaces to the entire European Union, contrary to the will of the majority of our people. The proposed procedural requirements are a mere smokescreen. We cannot allow the discrimination of certain groups of people and the false incrimination of countless individuals by these technologies.”

European digital rights group, Edri, also highlighted what it dubbed a “worrying gap” in the proposal around “discriminatory and surveillance technologies”. “The regulation allows too wide a scope for self-regulation by companies profiting from AI. People, not companies need to be the centre of this regulation,” said Sarah Chander, senior policy lead on AI at Edri, in a statement.

Access Now raised similar concerns in an initial reaction, saying the proposed prohibitions are “too limited”, and the legal framework “does nothing to stop the development or deployment of a host of applications of AI that drastically undermine social progress and fundamental rights”.

But the digital rights group welcomed transparency measures such as the publicly accessible database of high risk systems to be established — and the fact the regulation does include some prohibitions (albeit, which it said don’t go far enough).

Consumer rights umbrella group, BEUC, was also swiftly critical — attacking the proposal as weak on consumer protection because it focuses on regulating “a very limited range of AI uses and issues”.

“The European Commission should have put more focus on helping consumers trust AI in their daily lives,” said Monique Goyens, BEUC director general, in a statement: “People should be able to trust any product or service powered by artificial intelligence, be it ‘high-risk’, ‘medium-risk’ or ‘low-risk’. The EU must do more to ensure consumers have enforceable rights, as well as access to redress and remedies in case something goes wrong.”

New rules on machinery are also part of the legislative package — with adapted safety rules intended to take account of AI-fuelled changes (with the Commission saying it wants businesses which are integrating AI into machinery to only need to carry out one conformity assessment to comply with the framework).

Tech industry group Dot Europe (formerly Edima) — whose members include Airbnb, Apple, Facebook, Google, Microsoft and other platform giants — welcomed the release of the Commission’s AI proposal but had yet to offer detailed remarks at the time of writing, saying it was formulating its position.

While startup advocacy group, Allied For Startups, told us it also needs time to study the detail of the proposal, Benedikt Blomeyer, its EU policy director, warned over the potential risk of burdening startups. “Our initial reaction is that if done wrong, this could significantly increase the regulatory burden placed on startups,” he said. “The key question for this proposal will be whether it is proportionate to the potential risks that AI poses whilst ensuring that European startups can also take advantage of its potential benefits”.

Other tech lobby groups didn’t wait to go on the attack at the prospect of bespoke red tape wrapping AI — claiming the regulation would “kneecap the EU’s nascent AI industry before it can learn to walk” as one Washington- and Brussels-based tech policy thinktank (the Center for Data Innovation) put it.

The CCIA trade association also quickly warned against “unnecessary red tape for developers and users”, adding that regulation alone won’t make the EU a leader in AI.

Today’s proposal kicks off plenty of debate under the EU’s co-legislative process, with the European Parliament and Member States, via the EU Council, needing to have their say on the draft — meaning a lot could change before the EU institutions reach agreement on the final shape of a pan-EU AI regulation.

Commissioners declined to give a timeframe for when the legislation might be adopted, saying only that they hoped the other EU institutions would engage immediately and that the process could be completed as soon as possible. It could, nonetheless, be several years before the regulation is ratified and comes into force.

This report was updated with reactions to the Commission proposal, and with additional detail about the proposed enforcement structure (Article 37).