AI

UK to avoid fixed rules for AI – in favor of ‘context-specific guidance’

Comment


The U.K. isn’t going to be setting hard rules for AI any time soon.

Today, the Department for Science, Innovation and Technology (DSIT) published a white paper setting out the government’s preference for a light-touch approach to regulating artificial intelligence. It’s kicking off a public consultation process — seeking feedback on its plans up to June 21 — but appears set on paving a smooth road of ‘flexible principles’ that AI can speed through.

Worries about the risks of increasingly powerful AI technologies are very much treated as a secondary consideration, relegated far behind a political agenda to talk up the vast potential of high tech growth — and thus, if problems arise, the government is suggesting the U.K.’s existing (overstretched) regulators will have to deal with them, on a case-by-case basis, armed only with existing powers (and resources). So, er, lol!

The 91-page white paper, which is entitled “A pro-innovation approach to AI regulation”, talks about taking “a common-sense, outcomes-oriented approach” to regulating automation — by applying what the government frames as a “proportionate and pro-innovation regulatory framework”.

In a press release accompanying the white paper’s publication — with a clear eye on generating newspaper headlines that frame a narrative of ministers seeking to “turbocharge growth” — the government confirms there will be no dedicated watchdog for artificial intelligence, merely a set of “principles” for existing regulators to work with; so no new legislation, rather a claim of “adaptable” (but not legally binding) regulation.

DSIT says legislation “could” be introduced — at some unspecified future period, and when parliamentary time allows — “to ensure regulators consider the principles consistently”. So, yep, that’s the sound of a can being kicked down the road. But expect to see guidance emerging from a number of existing U.K. regulators over the next 12 months — along with some tools and “risk assessment templates” which AI makers may be encouraged to play around with (if they like).

There will also be the inevitable sandbox (funded with £2M from the public purse) — or at least a “sandbox trial to help businesses test AI rules before getting to market”, per DSIT. But evidently there won’t be a hard legal requirement to actually use it.

The government says its approach to AI will focus on “regulating the use, not the technology” — ergo, there won’t be any rules or risk levels assigned to entire sectors or technologies. Which is quite the contrast with the European Union’s direction of travel with its risk-based framework that includes some up-front prohibitions on certain uses of AI, with defined regimes for use-cases specified as high risk and self-regulation for lower risk uses.

“Instead, we will regulate based on the outcomes AI is likely to generate in particular applications,” the government stipulates, arguing — for example, and somewhat boldly in its choice of example here — that classifying all applications of AI in critical infrastructure as high risk “would not be proportionate or effective” because there might be some uses of AI in critical infrastructure that can be “relatively low risk”.

Because ministers have opted for what the white paper calls “context-specificity”, they decided against setting up a dedicated regulator for AI — hence the responsibility falls on existing bodies with expertise across various sectors.

“To best achieve this context-specificity we will empower existing UK regulators to apply the cross-cutting principles,” it writes on this. “Regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. Creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.”

Under the plan, existing regulators will be expected to apply a set of five principles — setting out “key elements of responsible AI design, development and use” — that the government hopes will guide businesses as they develop artificial intelligence.

“Regulators will lead the implementation of the framework, for example by issuing guidance on best practice for adherence to these principles,” it suggests, adding that they will be expected to apply the principles “proportionately” to address the risks posed by AI “within their remits, in accordance with existing laws and regulations” — arguing this will enable the principles to “complement existing regulation, increase clarity, and reduce friction for businesses operating across regulatory remits”.

It says it expects relevant regulators to need to issue “practical guidance” on the principles or update existing guidance — in order to “provide clarity to business” in what may otherwise be a vacuum of ongoing legal uncertainty. It also suggests regulators may need to publish joint guidance focused on AI use cases that cross multiple regulatory remits. So more work and more joint working is coming down the pipe for UK oversight bodies.

“Regulators may also use alternative measures and introduce other tools or resources, in addition to issuing guidance, within their existing remits and powers to implement the principles,” it goes on, adding that it will “monitor the overall effectiveness of the principles and the wider impact of the framework” — stipulating that: “This will include working with regulators to understand how the principles are being applied and whether the framework is adequately supporting innovation.”

So it’s seemingly leaving the door open to rowing back on certain principles if they’re considered too arduous by business.

‘Flexible principles’

“We recognise that particular AI technologies, foundation models for example, can be applied in many different ways and this means the risks can vary hugely. For example, using a chatbot to produce a summary of a long article presents very different risks to using the same technology to provide medical advice. We understand the need to monitor these developments in partnership with innovators while also avoiding placing unnecessary regulatory burdens on those deploying AI,” writes Michelle Donelan, the secretary of state for science, innovation and technology in the white paper’s executive summary where the government sets out its “pro-innovation” stall.

“To ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI. This will mean supporting innovation and working closely with business, but also stepping in to address risks when necessary. By underpinning the framework with a set of principles, we will drive consistency across regulators while also providing them with the flexibility needed.”

The existing regulatory bodies the government is intending to saddle with more tasks — drafting “tailored, context-specific approaches” which AI model makers can also only take under advisement (i.e. ignore) — include the Health and Safety Executive; the Equality and Human Rights Commission; and the Competition and Markets Authority (CMA), per DSIT.

The PR doesn’t mention the Information Commissioner’s Office (ICO), aka the data protection regulator, but it gets several references in the white paper and looks set to be another body press-ganged into producing AI guidance (usefully enough, the ICO has already offered some thoughts on AI snake oil).

One quick aside here: The CMA is still waiting for the government to empower a dedicated Digital Markets Unit (DMU) that was supposed to be reining in the market power of Big Tech, i.e. by passing the necessary legislation. But, last year, ministers opted to kick that can into the long grass — so the DMU has still not been put on a statutory footing almost two years after it soft launched in expectation of parliamentary time being found to empower it… So it’s becoming abundantly clear this government is a lot more fond of drafting press releases than smart digital regulation.

The upshot is the U.K. has been left trailing the whole of the EU on the salient area of digital competition (the bloc has the Digital Markets Act coming into application in a few months) — while Germany updated its national competition regime with an ex ante digital regime at the start of 2021 and has a bunch of pro-competition enforcements under its belt already.

Now — by design — U.K. ministers intend the country to trail peers on AI regulation, too; framing this as a choice to “avoid heavy-handed legislation which could stifle innovation”, as DSIT puts it, in favor of a mass of sectoral regulatory guidance that businesses can choose whether to follow — literally in the same breath as penning the line that: “Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.” So, um… legal certainty good or bad — which is it?!

In short this looks like a very British (post-Brexit) mess.

Across the English Channel, meanwhile, EU lawmakers are in the latter stages of negotiations over setting a risk-based framework for regulating AI — a draft law the European Commission presented way back in 2021; now with MEPs pushing for amendments to ensure the final text covers general purpose AIs like OpenAI’s ChatGPT. The EU also has a proposal for updating the bloc’s liability rules for software and AI on the table too.

In the face of the EU’s carefully structured risk-based framework, U.K. lawmakers are left trumpeting voluntary risk assessment templates and a toy sandbox — and calling this ‘DIY’ approach to generating trustworthy AI a ‘Brexit bonus’. Ouch.

The five principles the government wants to guide the use of AI — or, specifically, that existing regulators “should consider to best facilitate the safe and innovative use of AI in the industries they monitor” — are:

  • safety, security and robustness: “Applications of AI should function in a secure, safe and robust way where risks are carefully managed”
  • transparency and explainability: “Organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI”
  • fairness: “AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes”
  • accountability and governance: “Measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes”
  • contestability and redress: “People need to have clear routes to dispute harmful outcomes or decisions generated by AI”

All of which sound like fine words indeed. But without a legal framework to turn “principles” into hard rules — and ensure consistent application and enforcement atop entities that choose not to bother with any of that expensive safety stuff — it looks about as useful as whistling the Lord’s Prayer and hoping for the best if it’s trustworthy AI you’re looking for…

(Oh yes — and don’t forget the U.K. government is also in the process of watering down the aforementioned U.K. GDPR — after it recently invited businesses to “co-design” a new data protection framework. Which led to a revised reform emerging that aims to make it easier for commercial entities to process people’s data for use-cases like research, and which risks eroding the independence of the privacy watchdog by adding a politically appointed board, in order to (and I quote Donelan here) ensure “we are the most innovative economy in the world and that we cement ourselves as a Science and Technology Superpower”.)

The clear trend in the U.K. is of existing protections being rowed back as the government seeks to roll out the red carpet for AI-fuelled “innovation”, without a thought for what that might mean for rather essential stuff like safety or fairness — and therefore trustworthiness, assuming you want people to have a sliver of trust in the AIs you’re pumping out — but ministers are essentially saying: ‘Don’t worry, just lie back and think of GB’s GDP!’

Of course any developers building AI models in the U.K. and wanting to scale beyond those shores will have to consider regulations that apply outside the U.K. So the freedom to be so lightly regulated may, ultimately, come with a hard requirement to comply with foreign frameworks anyway — or else be tightly limited in geographical scope. (And, well, tech innovators do love to scale.)

Still, DSIT’s PR has a canned quote from Lila Ibrahim, COO (and U.K. AI Council Member) at Google-owned DeepMind — an AI giant that has been lagging behind rivals like OpenAI on the buzzy artificial intelligence tech of the moment (generative AI) — who lauds the government’s proposed “context-driven approach”, rubberstamping the direction of travel with the claim that it will “help regulation keep pace with the development of AI, support innovation and mitigate future risks”.

“AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly,” Ibrahim also suggests.

The government is clearly hoping its offer of ‘no rules except the ones you choose’ will encourage AI startups to pick the U.K. over other locations — where automation is being more tightly regulated. (Or as Donelan pens it: “Our pro-innovation approach will also act as a strong incentive when it comes to AI businesses based overseas establishing a presence in the UK.”)

It’s quite the gamble by the Conservative government — given the highly scalable potential for too lightly regulated AI to go horribly wrong. And headlines about ‘AI-powered critical infrastructure that failed’ won’t wait for government press releases; they’ll write themselves. But Rishi Sunak’s Tories are apparently going all in on this one.

In a press statement, Donelan goes on to offer this interesting construction — to explain what the government is doing:

AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.

Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.

It’s quite the collection of sentences — combining an assertion of the need to have “rules to make sure [AI] is developed safely”, with a sleight-of-hand segue to the offer of (just) “strong principles so that people can trust businesses to unleash this technology of tomorrow”. So not rules then. (And, I mean, ‘trusting businesses to unleash technology’ is not at all the same thing as trusting businesses to unleash trusted technology. In fact you could say they’re poles apart. So the choice of phrase there is quite the tell.)

But back to rules vs principles. We asked ChatGPT what the difference is between these two terms. And here’s what it told us:

Rules and principles are both guidelines that govern behavior, but they differ in their nature and application.

Rules are specific instructions or directives that dictate what one should or should not do. They are usually created by authorities or institutions and enforced through consequences such as punishments or rewards. Rules are often rigid and inflexible and do not take into account situational differences or individual circumstances. For example, “Do not exceed the speed limit on the highway” is a rule that applies to all drivers regardless of the context.

Principles, on the other hand, are broad concepts or values that guide behavior and decision-making. They are more flexible and adaptable than rules and provide a framework for making ethical or moral judgments. Principles are usually not enforced through punishments or rewards but are instead internalized and followed voluntarily. For example, the principle of honesty is a value that guides behavior in a variety of situations, such as being truthful in communication, respecting others’ property, and fulfilling obligations.

In summary, rules are specific and inflexible instructions while principles are broad and flexible values that guide behavior. Rules are enforced through external means while principles are internalized and followed voluntarily.

So, assuming this large language model is not simply hallucinating again, and the nuance it’s identifying is correct, Donelan is recognizing that fixed rules are required for AI to be safe while simultaneously confirming the government has decided against setting any right now. The verbal downgrade is to purely voluntary principles. Or, basically, it’s going to let businesses make up their own minds and do what they must in order to grow as fast as possible for the foreseeable future (or at least until after the next election). What could possibly go wrong!?

It’s clear the government’s growth-at-all-costs agenda has eaten a full course meal of AI hype. Pity the poor Brits set to become guinea pigs in the name of unleashing mindless automation atop a rudderless bark christened “innovation”.

Citizens of the U.K. will want to strap themselves in for this ride. Because if something does go wrong they’ll be forced to wait for the government to make parliamentary time available to actually pass some safety rules. Which may be a lot of breath to hold.

