President Biden issues executive order to set standards for AI safety and security

U.S. President Joe Biden has issued an executive order (EO) that seeks to establish “new standards” for AI safety and security, including requirements for companies developing foundation AI models to notify the federal government and share the results of all safety tests before their models are deployed to the public.

The fast-moving generative AI movement, driven by the likes of ChatGPT and foundation AI models developed by OpenAI, has sparked a global debate around the need for guardrails to counter the potential pitfalls of ceding too much control to algorithms. Back in May, G7 leaders identified key themes that need to be addressed as part of the so-called Hiroshima AI Process, with the seven member countries today reaching an agreement on guiding principles and a “voluntary” code of conduct for AI developers to follow.

Last week, the United Nations (UN) announced a new board to explore AI governance, while the U.K. is this week hosting its global AI Safety Summit at Bletchley Park, with U.S. Vice President Kamala Harris set to speak at the event.

The Biden-Harris Administration, for its part, has also been focusing on AI safety in the absence of anything legally binding, securing “voluntary commitments” from major AI developers including OpenAI, Google, Microsoft, Meta and Amazon. Those commitments were always intended as a prelude to an executive order, however, which is what is being announced today.

“Safe, secure, and trustworthy AI”

Specifically, the order sets out that developers of the “most powerful AI systems” must share their safety test results and related data with the U.S. government.

“As AI’s capabilities grow, so do its implications for Americans’ safety and security,” the order notes, adding that it’s intended to “protect Americans from the potential risks of AI systems.”

Invoking the Defense Production Act of 1950, the order specifically targets any foundation model that might pose a risk to national security, economic security or public health. That threshold, while somewhat open to interpretation, should cover just about any significant foundation model that comes to market.

“These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public,” the order adds.

Elsewhere, the order also outlines plans to develop various new tools and systems to ensure that AI is safe and trustworthy, with the National Institute of Standards and Technology (NIST) tasked with developing new standards “for extensive red-team testing” prior to release. Such tests will be applied across the board; the Departments of Energy and Homeland Security, for example, will address the risks AI poses to critical infrastructure.

The order also serves to underpin a number of new directives and standards, including — but not limited to — protecting against the risks of using AI to engineer dangerous biological materials; protecting against AI-powered fraud and deception; and establishing a cybersecurity program to build AI tools for addressing vulnerabilities in critical software.

Teeth

It’s worth noting that the order does address areas such as equity and civil rights, pointing to how AI can exacerbate discrimination and bias in healthcare, justice and housing, as well as the dangers that AI poses in relation to things like workplace surveillance and job displacement. But some might interpret the order as lacking real teeth, as much of it seems to be centered around recommendations and guidelines — for instance, it says that it wants to ensure fairness in the criminal justice system by “developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.”

And while the executive order goes some way toward codifying how AI developers should go about building safety and security into their systems, it’s not clear to what extent it’s enforceable without further legislative changes. For example, the order discusses concerns around data privacy: after all, AI makes it far easier to extract and exploit individuals’ private data at scale, something that developers might be incentivized to do as part of their model-training processes. However, the executive order merely calls on Congress to pass “bipartisan data privacy legislation” to protect Americans’ data, while requesting more federal support for the development of privacy-preserving AI techniques.

With Europe on the cusp of passing the first comprehensive AI regulations, it’s clear that the rest of the world is also grappling with ways to contain what is set to create one of the greatest societal disruptions since the Industrial Revolution. How effective President Biden’s executive order proves to be in reining in the likes of OpenAI, Google, Microsoft and Meta remains to be seen.