A Frontier AI taskforce established by the U.K. back in June to prepare for the AI Safety Summit held this week is on course to become a permanent fixture, as the U.K. bids to take a leadership role on AI policy. U.K. Prime Minister Rishi Sunak today formally announced the launch of the AI Safety Institute, a “global hub based in the U.K. and tasked with testing the safety of emerging types of AI.”
The institute was informally announced last week in the lead-up to this week’s summit. Now the government has confirmed that it will be led by Ian Hogarth — an investor, founder and engineer who also chaired the taskforce — and that Yoshua Bengio, one of the most prominent figures in the field of AI, will take the lead on producing its first report.
It’s not clear how much funding the government will inject into the AI Safety Institute, or whether industry players will be expected to foot some of the bill. The institute, which will sit within the Department for Science, Innovation and Technology, is described as “backed by leading AI companies,” although that may refer more to endorsement than to financial backing. We have reached out to DSIT to ask and will update as we learn more.
The news comes alongside yesterday’s announcement of a new agreement, the Bletchley Declaration, signed by all of the countries attending the summit, which commits them to joined-up testing and other measures for assessing the risks of “frontier AI” technologies, such as large language models.
“Until now, the only people testing the safety of new AI models have been the very companies developing them,” Sunak said in a meeting with journalists this evening. Citing parallel work by other countries, the UN and the G7 to address AI, he said the plan is now to “work together on testing the safety of new AI models before they are released.”
All of this, to be sure, is still very much in its early stages. The U.K. has up to now resisted making moves to consider how to regulate AI technologies, both at the platform level and at more specific application levels, and some believe that without any teeth, the ideas of safety and quantifying risk are meaningless.
Sunak argued that it’s too early to regulate.
“The technology is developing at such a pace that governments have to make sure that we can keep up,” Sunak said in response to an accusation that he was being too light on legislation while going heavy on big ideas. “Before you start mandating things and legislating for things… you need to know exactly what you’re legislating for.”
While transparency seems to be a very clear aim of many of the long-term efforts around this brave new world of technology, today’s series of meetings at Bletchley, day two of the summit, was very far from that ethos.
In addition to bilateral sessions with European Commission President Ursula von der Leyen and Secretary-General of the United Nations António Guterres, the summit today focused on two plenary sessions. Closed off to journalists beyond small pools watching as people assembled in rooms, attendees at these included the CEOs of DeepMind, OpenAI, Anthropic, Inflection AI, Salesforce and Mistral, as well as the president of Microsoft and the head of AWS. Among those representing governments, the lineup included Sunak and U.S. Vice President Kamala Harris, as well as Giorgia Meloni of Italy and French minister of finance Bruno Le Maire.
Notably, although China was a much-touted guest during the first day, it did not make an appearance at the closed plenaries on day two.
Also absent at today’s sessions, it seems, was Elon Musk, the owner of X.ai and of X (formerly known as Twitter). Sunak is due to have a fireside chat with him this evening on Musk’s social platform. Interestingly, that is not expected to be a live broadcast.