UK public sector failing to be open about its use of AI, review finds

A report into the use of artificial intelligence by the U.K.’s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens’ lives.

Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer-funded healthcare — with health minister Matt Hancock setting out a tech-fueled vision of “preventative, predictive and personalised care” in 2018, calling for a root-and-branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of “healthtech” apps and services.

He has also personally championed a chatbot startup, Babylon Health, which is using AI for healthcare triage — and which is now selling a service into the NHS.

Policing is another area where AI is being accelerated into U.K. public service delivery, with a number of police forces trialing facial recognition technology — and London’s Met Police switching over to a live deployment of the AI technology just last month.

However, the rush by cash-strapped public services to tap AI “efficiencies” risks glossing over a range of ethical concerns about the design and implementation of such automated systems: from fears about embedding bias and discrimination into service delivery and scaling harmful outcomes, to questions of consent around access to the data sets used to build AI models, to questions of human agency over automated outcomes. All of these concerns require transparency into AI systems if there is to be accountability over automated decisions.

The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.

Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation, after it ruled that an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants will commit benefits or tax fraud breached their human rights.

The court objected to a lack of transparency about how the system functions, as well as the associated lack of controllability — ordering an immediate halt to its use.

The U.K. parliamentary committee that reviews standards in public life has today sounded a similar warning — publishing a series of recommendations for public-sector use of AI and warning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.

“Under the principle of openness, a current lack of information about government use of AI risks undermining transparency,” it writes in an executive summary.

“Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice.”

“This review found that the government is failing on openness,” it goes on, asserting that: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

In 2018, the UN’s special rapporteur on extreme poverty and human rights raised concerns about the U.K.’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale — warning then that the impact of a digital welfare state on vulnerable people would be “immense,” and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.

Per the committee’s assessment, it is “too early to judge if public sector bodies are successfully upholding accountability.”

Parliamentarians also suggest that “fears over ‘black box’ AI… may be overstated” — instead dubbing “explainable AI” a “realistic goal for the public sector.”

On objectivity, they write that data bias is “an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias.”
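
The committee does not prescribe a particular bias metric, but as a purely illustrative sketch of what “measuring” data bias can mean in practice, one common quantity is the demographic parity gap: the difference in positive-outcome rates between groups. The function name and toy numbers below are hypothetical, not drawn from the report:

```python
def demographic_parity_difference(outcomes, groups):
    """Return the gap in positive-outcome rate between groups.

    outcomes: list of 0/1 decisions (1 = positive outcome)
    groups:   list of group labels, same length as outcomes
    """
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    values = sorted(rates.values())
    # Gap between the most- and least-favoured groups; 0.0 means parity.
    return values[-1] - values[0]

# Toy example: a system flags 6 of 8 people in group A positively,
# but only 2 of 8 in group B.
outcomes = [1, 1, 1, 1, 1, 1, 0, 0] + [1, 1, 0, 0, 0, 0, 0, 0]
groups = ["A"] * 8 + ["B"] * 8
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A gap this large would flag the system for further scrutiny; mitigation (rebalancing training data, adjusting decision thresholds) is a separate and harder step, which is broadly the “further work” the committee calls for.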

The use of AI in the U.K. public sector remains limited at this stage, according to the committee’s review, with healthcare and policing currently having the most developed AI programmes — where the tech is being used to identify eye disease and predict reoffending rates, for example.

“Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage,” the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are “examining how AI can increase efficiency in service delivery.”

It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care — noting the example of Hampshire County Council trialing the use of Amazon Echo smart speakers in the homes of adults receiving social care, as a tool to bridge the gap between visits from professional carers, and pointing to a Guardian article which reported that one-third of U.K. councils use algorithmic systems to make welfare decisions.

But the committee suggests there are still “significant” obstacles to what they describe as “widespread and successful” adoption of AI systems by the U.K. public sector.

“Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation,” it writes. “It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects.”

The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.

“While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users,” it suggests.

Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. “All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery,” the committee writes.

Another recommendation is for clarity over which ethical principles and guidance applies to public sector use of AI — with the committee noting there are three sets of principles that could apply to the public sector, which is generating confusion.

“The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use,” it recommends.

It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies’ use of AI complies with the U.K. Equality Act 2010.

The committee is not recommending a new regulator should be created to oversee AI — but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.

It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI — supporting the government’s intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalisation.)

Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that “ensure that private companies developing AI solutions for the public sector appropriately address public standards.”

“This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements,” it suggests.

Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of “driving blind, with no control over who is in the AI driving seat.”

“This serious report sadly confirms what we know to be the case — that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector,” she said. “The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control.

“Last year, I argued in parliament that Government should not accept further AI algorithms in decision-making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all levels of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It’s time for action.”
