UK’s approach to AI safety lacks credibility, report warns

In recent weeks, the U.K. government has been trying to cultivate an image of itself as an international mover and shaker in the nascent field of AI safety — dropping a flashy announcement of an upcoming summit on the topic last month, along with a pledge to spend £100 million on a foundational model task force that will do “cutting-edge” AI safety research, as it tells it.

Yet the self-same government, led by U.K. prime minister and Silicon Valley superfan Rishi Sunak, has eschewed the need to pass new domestic legislation to regulate applications of AI — a stance its own policy paper on the topic brands “pro-innovation.”

It is also in the midst of passing a deregulatory reform of the national data protection framework that risks working against AI safety.

The latter is one of several conclusions from the Ada Lovelace Institute, an independent research body that is part of the Nuffield Foundation charitable trust, in a new report examining the U.K.’s approach to regulating AI, which makes for diplomatic-sounding but, at times, pretty awkward reading for ministers.

The report packs a full 18 recommendations for leveling up government policy/credibility in this area — that is, if the U.K. wants to be taken seriously on the topic.

The Institute advocates for an “expansive” definition of AI safety — “reflecting the wide variety of harms that are arising as AI systems become more capable and embedded in society.” So the report is concerned with how to regulate harms that “AI systems can cause today.” Call them real-world AI harms. (Not the sci-fi-inspired, theoretical future risks that certain high-profile figures in the tech industry have puffed up of late, seemingly in a bid to attention-hack policymakers.)

For now, it’s fair to say the Sunak government’s approach to regulating (real-world) AI safety has been contradictory — heavy on flashy, industry-led PR claiming it wants to champion safety, but light on policy proposals for setting substantive rules to guard against the smorgasbord of risks and harms we know can flow from ill-judged applications of automation.

Here’s the Ada Lovelace Institute dropping the primary truth bomb:

The UK Government has laid out its ambition to make the UK an “AI superpower,” leveraging the development and proliferation of AI technologies to benefit the UK’s society and economy, and hosting a global summit in autumn 2023. This ambition will only materialise with effective domestic regulation, which will provide the platform for the UK’s future AI economy.

The report’s laundry list of recommendations goes on to make it clear the Institute sees a lot of room for improvement on the U.K.’s current approach to AI. 

Earlier this year, the government published its preferred approach to regulating AI domestically — saying it didn’t see the need for new legislation or oversight bodies at this stage. Instead, the white paper offered a set of flexible principles the government suggested existing sector-specific (and/or cross-cutting) regulators should “interpret and apply to AI within their remits.” Just without any new legal powers or extra funding to oversee novel uses of AI.

The five principles set out in the white paper are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. All of these sound fine on paper — but paper alone clearly isn’t going to cut it when it comes to regulating AI safety.

The U.K.’s plan to let existing regulators figure out what to do about AI, armed with just some broad-brush principles to aim for and no new resources, contrasts with that of the EU, where lawmakers are busy hammering out agreement on a risk-based framework the bloc’s executive proposed back in 2021.

The U.K.’s shoestring-budget approach of saddling existing, overworked regulators with new responsibilities for eyeing AI developments on their patch, without any powers to enforce outcomes on bad actors, doesn’t look very credible on AI safety, to put it mildly.

It doesn’t even seem a coherent strategy if you’re shooting for being pro-innovation, since it will demand AI developers consider a whole patchwork of sector-specific and cross-cutting legislation drafted long before the latest AI boom. Developers may also find themselves subject to oversight by a number of different regulatory bodies (however weak sauce their attention might be, given the lack of resources and legal firepower to enforce the aforementioned principles). So, really, it looks like a recipe for uncertainty over which existing rules may apply to AI apps. (And, most probably, for a patchwork of regulatory interpretations, depending on the sector, use case and oversight bodies involved. Ergo, confusion and cost, not clarity.)

Even if existing U.K. regulators do quickly produce guidance on how they will approach AI — as some already are doing or working toward — there will still be plenty of gaps, as the Ada Lovelace Institute’s report also points out, since coverage gaps are a feature of the U.K.’s existing regulatory landscape. So the proposal to simply stretch this approach further implies regulatory inconsistency getting baked in, and even amplified, as usage of AI scales/explodes across all sectors.

Here’s the Institute again:

Large swathes of the UK economy are currently unregulated or only partially regulated. It is unclear who would be responsible for implementing AI principles in these contexts, which include:

- sensitive practices such as recruitment and employment, which are not comprehensively monitored by regulators, even within regulated sectors;
- public-sector services such as education and policing, which are monitored and enforced by an uneven network of regulators;
- activities carried out by central government departments, which are often not directly regulated, such as benefits administration or tax fraud detection;
- unregulated parts of the private sector, such as retail.

“AI is being deployed and used in every sector but the UK’s diffuse legal and regulatory network for AI currently has significant gaps. Clearer rights and new institutions are needed to ensure that safeguards extend across the economy,” it also suggests.

Another growing contradiction in the government’s claimed “AI leadership” position is that its bid to make the country a global AI safety hub is being directly undermined by in-train efforts to water down domestic protections for people’s data — such as by lowering protections for people subject to automated decisions with a significant and/or legal impact — via the deregulatory Data Protection and Digital Information Bill (No. 2).

While the government has so far avoided the most head-banging Brexiteer suggestions for ripping up the EU-derived data protection rulebook — such as simply deleting the entirety of Article 22 (which deals with protections for automated decisions) from the U.K.’s General Data Protection Regulation — it is nonetheless forging ahead with a plan to reduce the level of protection citizens enjoy under current data protection law in various ways.

“The UK GDPR — the legal framework for data protection currently in force in the UK — provides protections that are vital to protecting individuals and communities from potential AI harms. The Data Protection and Digital Information Bill (No. 2), tabled in its current form in March 2023, significantly amends these protections,” warns the Institute, pointing for example to the Bill removing a prohibition on many types of automated decisions — and instead requiring data controllers to have “safeguards in place, such as measures to enable an individual to contest the decision” — which it argues is a lower level of protection in practice.

“The reliance of the Government’s proposed framework on existing legislation and regulators makes it even more important that underlying regulation like data protection governs AI appropriately,” it goes on. “Legal advice commissioned by the Ada Lovelace Institute . . . suggests that existing automated processing safeguards may not in practice provide sufficient protection to people interacting with everyday services, like applying for a loan.”

“Taken collectively, the Bill’s changes risk further undermining the Government’s regulatory proposals for AI,” the report adds.

The Institute’s first recommendation is thus for government to rethink elements of the data protection reform bill that are “likely to undermine the safe development, deployment and use of AI, such as changes to the accountability framework.” It also recommends the government widen its review to look at existing rights and protections in U.K. law — with a view to plugging any other legislative gaps and introducing new rights and protections for people affected by AI-informed decisions where necessary.

Other recommendations in the report include introducing a statutory duty for regulators to have regard to the aforementioned principles, including “strict transparency and accountability obligations,” and providing them with more funding/resources to tackle AI-related harms; exploring the introduction of a common set of powers for regulators, including an ex ante, developer-focused regulatory capability; and looking at whether an AI ombudsperson should be established to support people adversely affected by AI.

The Institute also recommends the government clarify the law around AI and liability — another area where the EU is already streets ahead.

On foundational model safety — an area that’s garnered particular interest and attention from the U.K. government of late, thanks to the viral buzz around generative AI tools like OpenAI’s ChatGPT — the Institute also believes the government needs to go further, recommending that U.K.-based developers of foundational models be subject to mandatory reporting requirements, to make it easier for regulators to stay on top of a very fast-moving technology.

It even suggests that leading foundational model developers, such as OpenAI, Google DeepMind and Anthropic, should be required to provide government with notification when they (or any subprocessors they’re working with) begin large-scale training runs of new models.

“This would provide Government with an early warning of advancements in AI capabilities, allowing policymakers and regulators to prepare for the impact of these developments, rather than being caught unaware,” it suggests, adding that reporting requirements should also include information such as access to the data used to train models; results from in-house audits; and supply chain data.

Another suggestion is for the government to invest in small pilot projects to bolster its own understanding of trends in AI R&D.

Commenting on the report findings in a statement, Michael Birtwistle, associate director at the Ada Lovelace Institute, said:

The Government rightfully recognises that the UK has a unique opportunity to be a world-leader in AI regulation and the prime minister should be commended for his global leadership on this issue. However, the UK’s credibility on AI regulation rests on the Government’s ability to deliver a world-leading regulatory regime at home. Efforts towards international coordination are very welcome but they are not sufficient. The Government must strengthen its domestic proposals for regulation if it wants to be taken seriously on AI and achieve its global ambitions.
