UK gov’t urged against delay in setting AI rulebook as MPs warn policymakers aren’t keeping up

A U.K. parliamentary committee that’s investigating the opportunities and challenges unfolding around artificial intelligence has urged the government to reconsider its decision not to introduce legislation to regulate the technology in the short term — calling for an AI bill to be a priority for ministers.

The government should be moving with “greater urgency” to set rules for AI governance if ministers’ ambitions to make the U.K. an AI safety hub are to be realized, committee chair Greg Clark writes in a statement today accompanying the publication of an interim report, which warns the government’s approach so far “is already risking falling behind the pace of development of AI”.

“The government is yet to confirm whether AI-specific legislation will be included in the upcoming King’s Speech in November. This new session of Parliament will be the last opportunity before the General Election for the UK to legislate on the governance of AI,” the committee also observes, before going on to argue for “a tightly-focussed AI Bill” to be introduced in the new session of parliament this fall.

“Our view is that this would help, not hinder, the prime minister’s ambition to position the UK as an AI governance leader,” the report continues. “We see a danger that if the UK does not bring in any new statutory regulation for three years it risks the government’s good intentions being left behind by other legislation — like the EU AI Act — that could become the de facto standard and be hard to displace.”

It’s not the first such warning over the government’s decision to defer legislating on AI. A report last month by the independent research-focused Ada Lovelace Institute called out contradictions in ministers’ approach, pointing out that, on the one hand, the government is pitching the U.K. as a global hub for AI safety research while, on the other, it is proposing no new laws for AI governance and actively pushing to deregulate existing data protection rules in a way the Institute suggests puts its AI safety agenda at risk.

Back in March the government set out its preference for not introducing any new legislation to regulate artificial intelligence in the short term, touting what it branded a “pro-innovation” approach based on setting out some flexible “principles” to govern use of the tech. Existing U.K. regulatory bodies would be expected to take account of AI activity where it intersects with their remits, per the plan, just without getting any new powers or extra resources.

The prospect of AI governance being dumped onto the U.K.’s existing (over-stretched) regulatory bodies without any new powers or formally legislated duties has clearly raised concerns among MPs scrutinizing the risks and opportunities attached to rising uptake of automation technologies.

The Science, Innovation and Technology Committee’s interim report sets out what it dubs twelve challenges of AI governance that it says policymakers must address, including bias, privacy, misrepresentation, explainability, IP and copyright, and liability for harms; as well as issues related to fostering AI development — such as data access, compute access and the open source vs proprietary code debate.

The report also flags challenges related to employment, as growing use of automation tools in the workplace is likely to disrupt jobs; and emphasizes the need for international coordination and global cooperation on AI governance. It even includes a reference to “existential” concerns pumped up in recent times by a number of high-profile technologists, who have made headline-grabbing claims that AI “superintelligence” could pose a threat to humanity’s continued existence. (“Some people think that AI is a major threat to human life,” the committee observes in its twelfth bullet point. “If that is a possibility, governance needs to provide protections for national security.”)

Judging by the list it’s compiled in the interim report, the committee appears to be taking a comprehensive look at the challenges posed by AI. However, its members seem less convinced the U.K. government is equally on top of the detail.

“The UK government’s proposed approach to AI governance relies heavily on our existing regulatory system, and the promised central support functions. The time required to establish new regulatory bodies means that adopting a sectoral approach, at least initially, is a sensible starting point. We have heard that many regulators are already actively engaged with the implications of AI for their respective remits, both individually and through initiatives such as the Digital Regulation Cooperation Forum. However, it is already clear that the resolution of all of the Challenges set out in this report may require a more well-developed central coordinating function,” they warn.

The report goes on to suggest the government should, at the least, establish “‘due regard’ duties for existing regulators” in the AI bill it recommends be introduced as a matter of priority.

Another call the report makes is for ministers to undertake a “gap analysis” of U.K. regulators, one that looks not only at “resourcing and capacity but whether any regulators require new powers to implement and enforce the principles outlined in the AI white paper”. A lack of such powers and resources is something the Ada Lovelace Institute’s report also flagged as a threat to the government’s approach delivering effective AI governance.

“We believe that the UK’s depth of expertise in AI and the disciplines which contribute to it — the vibrant and competitive developer and content industry that the UK is home to; and the UK’s longstanding reputation for developing trustworthy and innovative regulation — provides a major opportunity for the UK to be one of the go-to places in the world for the development and deployment of AI. But that opportunity is time-limited,” the report argues in its concluding remarks. “Without a serious, rapid and effective effort to establish the right governance frameworks — and to ensure a leading role in international initiatives — other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer.

“We urge the government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed.”

Earlier this summer, prime minister Rishi Sunak took a trip to Washington to drum up U.S. support for an AI safety summit his government announced it would host this autumn. The initiative came just a few months after the government’s AI white paper had sought to downplay risks while hyping the potential for the tech to grow the economy. Sunak’s sudden interest in AI safety also seems to have been sparked by a handful of meetings this summer with AI industry CEOs, including OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and Anthropic’s Dario Amodei.

The U.S. AI giants’ talking points on regulation and governance have largely played up theoretical future risks from so-called artificial superintelligence, rather than encouraging policymakers to direct their attention toward the full spectrum of AI harms happening in the here and now: whether bias, privacy or copyright harms, or, indeed, issues of digital market concentration that risk AI advances locking in another generation of U.S. tech giants as our inescapable overlords.

Critics argue the AI giants’ tactic is to lobby for self-serving regulation that creates a competitive moat for their businesses by artificially restricting access to AI models and/or dampening others’ ability to build rival tech — while also distracting policymakers from passing (or indeed enforcing) legislation that addresses the real-world AI harms their tools are already causing.

The committee’s concluding remarks appear alive to this concern, too. “Some observers have called for the development of certain types of AI models and tools to be paused, allowing global regulatory and governance frameworks to catch up. We are unconvinced that such a pause is deliverable. When AI leaders say that new regulation is essential, their calls cannot responsibly be ignored — although it should also be remembered that it is not unknown for those who have secured an advantageous position to seek to defend it against market insurgents through regulation,” the report notes.

We’ve reached out to the Department for Science, Innovation and Technology for a response to the committee’s call for an AI bill to be introduced in the new session of parliament.

Update: A spokesperson for the department sent us this statement:

AI has enormous potential to change every aspect of our lives, and we owe it to our children and our grandchildren to harness that potential safely and responsibly.

That’s why the UK is bringing together global leaders and experts for the world’s first major global summit on AI safety in November — driving targeted, rapid international action on the guardrails needed to support innovation while tackling risks and avoiding harms.

Our AI Regulation White Paper sets out a proportionate and adaptable approach to regulation in the UK, while our Foundation Model Taskforce is focused on ensuring the safe development of AI models with an initial investment of £100 million — more funding dedicated to AI safety than any other government in the world.

The government also suggested it may go further, describing the AI regulation white paper as a first step in addressing the risks and opportunities presented by the technology. It added that it plans to review and adapt its approach in response to the fast pace of developments in the field.