Google to work with Europe on stop-gap ‘AI Pact’

Google’s Sundar Pichai has agreed to work with lawmakers in Europe on what’s being referred to as an “AI Pact” — seemingly a stop-gap set of voluntary rules or standards while formal regulations for applying AI are still being worked on.

Pichai met with Thierry Breton, the European Union's internal market commissioner, who put out a statement after today's confab, saying: "There is no time to lose in the AI race to build a safe online environment."

A briefing put out by his office after the meeting also said the EU wants to be “proactive” and work on an AI pact ahead of incoming EU legislation set to apply to AI.

The memo added that the bloc wants to launch an AI Pact "involving all major European and non-European AI actors on a voluntary basis" and ahead of the legal deadline of the incoming pan-EU AI Act.

However, at present, Google is the only tech giant whose name has been publicly attached to the initiative.

We’ve reached out to Google and the European Commission with questions about the initiative.

In further public remarks, Breton said:

We expect technology in Europe to respect all of our rules, on data protection, online safety, and artificial intelligence. In Europe, it’s not pick and choose.

I am pleased that Sundar Pichai recognises this, and that he is committed to complying with all EU rules.

The GDPR [General Data Protection Regulation] is in place. The DSA [Digital Services Act] and DMA [Digital Markets Act] are being implemented. Negotiations on the AI Act are approaching the final stage and I call on the European Parliament and Council to adopt the framework before the end of the year.

Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI Pact on a voluntary basis ahead of the legal deadline.

I also welcome Sundar’s commitment to step up the fight against disinformation ahead of elections in Europe.

While there are no details yet on what the "AI Pact" might contain, as with any self-regulatory arrangement it would lack legal bite: there would be no way to force developers to sign up, nor any consequences for failing to meet the (voluntary) commitments.

Still, it’s perhaps a step towards the kind of international cooperation on rule-making that’s been called for in recent weeks and months by a number of technologists.

The EU has past precedent when it comes to getting tech giants to ink their name to a little self-regulation: over several years it has established a couple of voluntary agreements (aka Codes), which a number of tech giants (including Google) signed up to, committing to improve their responses to reports of online hate speech and the spread of harmful disinformation. And while those two Codes haven't resolved what remain complex online speech moderation issues, they have given the EU a yardstick for measuring whether platforms are living up to their own claims, and, at times, a stick with which to dish out a light public beating when they're not.

More generally, the EU remains ahead of the global pack on digital rule-making and has already drafted regulations for artificial intelligence, having proposed a risk-based framework for AI apps two years ago. However, even the bloc's best efforts are still lagging developments in the field, which have felt especially blistering this year after OpenAI's generative AI chatbot, ChatGPT, was made broadly available to web users and garnered viral attention.

Currently, the draft EU AI Act, proposed back in April 2021, remains a live piece of lawmaking between the European Parliament and Council, with the former recently agreeing on a raft of amendments it wants included, several of which target generative AI.

A compromise on a final text will need to be reached between EU co-legislators so it remains to be seen what final shape the bloc’s AI rulebook will take.

Plus, even if the law is adopted before the end of the year, which is the most optimistic timeline, it will certainly come with an implementation period, most likely of at least a year, before it applies to AI developers. Hence the keenness of EU commissioners to press for stop-gap measures.

Earlier this week, EVP Margrethe Vestager, who heads up the bloc’s digital strategy, suggested the EU and U.S. were set to cooperate on establishing minimum standards before legislation enters into force (via Reuters).

In further remarks today, following the meeting with Google, she tweeted: "We need the AI Act as soon as possible. But AI technology evolves at extreme speed. So we need voluntary agreement on universal rules for AI now."

Elaborating on Vestager's comment, a Commission spokesperson said: "At the G7 digital ministerial in Takasaki, Japan, on 29-30 April, EVP Vestager proposed internationally agreed guardrails on AI that companies can comply with voluntarily until the AI Act is in force in the EU. This proposal was picked up by G7 leaders, who last Saturday agreed in their Communique to launch the 'Hiroshima AI Process', with the aim of designing such guardrails, in particular for generative AI."

Despite these sudden expressions of high-level haste, it's worth noting that the EU's existing data protection rulebook, the GDPR, may apply, and has already been applied against certain AI apps, including ChatGPT, Replika and Clearview AI, to name three. For example, a regulatory intervention on ChatGPT in Italy at the end of March briefly led to a service suspension, which was followed by OpenAI producing new disclosures and controls for users in an apparent bid to comply with privacy rules.

Add to that, as Breton notes, the incoming DSA and DMA may create further hard requirements that AI app makers will need to abide by, depending on the nature of their services, in the coming months and years as those rules start to apply to digital services, platforms and the most market-shaping tech giants (in the case of the DMA).

Nonetheless, the EU remains convinced of the need for dedicated risk-based rules for AI. And, it seems, the Commission is keen to double down on the much-vaunted "Brussels effect" its digital lawmaking can attract by announcing a stop-gap AI Pact now.

In recent weeks and months, U.S. lawmakers have also been turning their attention to the fraught question of how best to regulate AI, with a Senate committee recently holding a hearing in which it took testimony from OpenAI CEO Sam Altman, asking him for his thoughts on how the technology should be regulated.

Google may be hoping to play the other side by rushing to work with the EU on voluntary standards. Let the AI regulation arms race begin!

Update: A spokesman for Breton’s office confirmed Google is the first and only company involved with the AI Pact at this early stage.

“At th[is] stage the first company is Google as we just announced the idea of the Pact,” he said. “It will be an evolving process with companies able to sign up gradually and commitment levels may evolve.”

This report was updated with additional remarks by Vestager and Breton's office.