
Uber Eats courier’s fight against AI bias shows justice under UK law is hard won

Comment

Uber Eats bike courier
Image Credits: Jakub Porzycki/NurPhoto / Getty Images

On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, had received a payout from Uber after “racially discriminatory” facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up jobs delivering food on Uber’s platform.

The news raises questions about how fit U.K. law is to deal with the rising use of AI systems. In particular, it highlights the lack of transparency around automated systems rushed to market with a promise of boosting user safety and/or service efficiency, a haste that risks blitz-scaling individual harms even as achieving redress for those affected by AI-driven bias can take years.

The lawsuit followed a number of complaints about failed facial recognition checks since Uber implemented the Real Time ID Check system in the U.K. in April 2020. Uber’s facial recognition system — based on Microsoft’s facial recognition technology — requires the account holder to submit a live selfie checked against a photo of them held on file to verify their identity.
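For readers unfamiliar with how such selfie checks typically work, below is a minimal, purely illustrative sketch of a verification flow in Python: the live selfie and the photo on file are each mapped to an embedding vector, and the similarity score is compared against a threshold. The embedding function, threshold value and file names are assumptions for illustration only and do not reflect Uber's or Microsoft's actual systems; the point is simply that a model producing systematically lower similarity scores for some groups will register "continued mismatches" for those users.

```python
# Illustrative sketch only: a generic face-verification flow of the kind
# described above. NOT Uber's or Microsoft's actual implementation; the
# embedding model, threshold and file names are hypothetical.
import numpy as np

MATCH_THRESHOLD = 0.6  # hypothetical similarity cut-off


def get_face_embedding(image_path: str) -> np.ndarray:
    """Placeholder for a face-embedding model (in practice, a neural network
    that maps a face crop to a fixed-length vector). Returns a dummy,
    deterministic unit vector here so the sketch runs on its own."""
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    vec = rng.normal(size=512)
    return vec / np.linalg.norm(vec)


def verify_identity(selfie_path: str, reference_path: str) -> bool:
    """Compare a live selfie against the photo held on file and report
    whether the cosine similarity clears the match threshold."""
    selfie = get_face_embedding(selfie_path)
    reference = get_face_embedding(reference_path)
    similarity = float(np.dot(selfie, reference))  # vectors are unit-norm
    return similarity >= MATCH_THRESHOLD


if __name__ == "__main__":
    ok = verify_identity("live_selfie.jpg", "photo_on_file.jpg")
    print("ID check passed" if ok else "Mismatch: flag for human review")
```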

Failed ID checks

Per Manjang’s complaint, Uber suspended and then terminated his account following a failed ID check and subsequent automated process, claiming to find “continued mismatches” in the photos of his face he had taken for the purpose of accessing the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).

Years of litigation followed, with Uber failing to have Manjang’s claim struck out or a deposit ordered for continuing with the case. Those tactics appear to have helped string out the litigation, with the EHRC describing the case as still in “preliminary stages” in fall 2023 and noting that it shows “the complexity of a claim dealing with AI technology”. A final hearing had been scheduled for 17 days in November 2024.

That hearing won’t take place after Uber offered — and Manjang accepted — a payment to settle, meaning fuller details of what exactly went wrong and why won’t be made public. Terms of the financial settlement have not been disclosed, either. Uber did not provide details when we asked, nor did it offer comment on exactly what went wrong.

We also contacted Microsoft for a response to the case outcome, but the company declined comment.

Despite settling with Manjang, Uber is not publicly accepting that its systems or processes were at fault. Its statement about the settlement denies courier accounts can be terminated as a result of AI assessments alone, as it claims facial recognition checks are back-stopped with “robust human review.”

“Our Real Time ID check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang’s temporary loss of access to his courier account.”

Clearly, though, something went very wrong with Uber’s ID checks in Manjang’s case.

Pa Edrissa Manjang (Photo: Courtesy of ADCU)

Worker Info Exchange (WIE), a platform workers’ digital rights advocacy organization that also supported Manjang’s complaint, obtained all of his selfies from Uber via a Subject Access Request under U.K. data protection law and was able to show that every photo he had submitted to the facial recognition check was indeed a photo of himself.

“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told ‘we were not able to confirm that the provided photos were actually of you and because of continued mismatches, we have made the final decision on ending our partnership with you’,” WIE recounts in discussion of his case in a wider report looking at “data-driven exploitation in the gig economy”.

Based on details of Manjang’s complaint that have been made public, it looks clear that both Uber’s facial recognition checks and the system of human review it had set up as a claimed safety net for automated decisions failed in this case.

Equality law plus data protection

The case calls into question how fit for purpose U.K. law is when it comes to governing the use of AI.

Manjang was finally able to get a settlement from Uber via a legal process based on equality law — specifically, a discrimination claim under the U.K.’s Equality Act 2010, which lists race as a protected characteristic.

In a statement, Baroness Kishwer Falkner, chairwoman of the EHRC, criticized the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work.”

“AI is complex, and presents unique challenges for employers, lawyers and regulators. It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”

U.K. data protection law is the other relevant piece of legislation here. On paper, it should provide powerful protections against opaque AI processes.

The selfie data relevant to Manjang’s claim was obtained using data access rights contained in the U.K. GDPR. If he had not been able to obtain such clear evidence that Uber’s ID checks had failed, the company might not have opted to settle at all. Without the ability to access relevant personal data, proving that a proprietary system is flawed would be even harder, further stacking the odds in favor of the far better resourced platforms.

Enforcement gaps

Beyond data access rights, powers in the U.K. GDPR are supposed to provide individuals with additional safeguards, including against automated decisions with a legal or similarly significant effect. The law also demands a lawful basis for processing personal data, and encourages system deployers to be proactive in assessing potential harms by conducting a data protection impact assessment. That should force further checks against harmful AI systems.

However, enforcement is needed for these protections to have effect — including a deterrent effect against the rollout of biased AIs.

In the U.K.’s case, the relevant enforcer, the Information Commissioner’s Office (ICO), failed to step in and investigate, despite complaints about Uber’s misfiring ID checks dating back to 2021.

Jon Baines, a senior data protection specialist at the law firm Mishcon de Reya, suggests “a lack of proper enforcement” by the ICO has undermined legal protections for individuals.

“We shouldn’t assume that existing legal and regulatory frameworks are incapable of dealing with some of the potential harms from AI systems,” he tells TechCrunch. “In this example, it strikes me…that the Information Commissioner would certainly have jurisdiction to consider both in the individual case, but also more broadly, whether the processing being undertaken was lawful under the U.K. GDPR.

“Things like — is the processing fair? Is there a lawful basis? Is there an Article 9 condition (given that special categories of personal data are being processed)? But also, and crucially, was there a solid Data Protection Impact Assessment prior to the implementation of the verification app?”

“So, yes, the ICO should absolutely be more proactive,” he adds, querying the lack of intervention by the regulator.

We contacted the ICO about Manjang’s case, asking it to confirm whether or not it’s looking into Uber’s use of AI for ID checks in light of complaints. A spokesperson for the watchdog did not directly respond to our questions but sent a general statement emphasizing the need for organizations to “know how to use biometric technology in a way that doesn’t interfere with people’s rights”.

“Our latest biometric guidance is clear that organisations must mitigate risks that come with using biometric data, such as errors identifying people accurately and bias within the system,” its statement also said, adding: “If anyone has concerns about how their data has been handled, they can report these concerns to the ICO.”

Meanwhile, the government is in the process of diluting data protection law via a post-Brexit data reform bill.

The government also confirmed earlier this year that it will not introduce dedicated AI safety legislation at this time, despite Prime Minister Rishi Sunak making eye-catching claims about AI safety being a priority for his administration.

Instead, it affirmed a proposal — set out in its March 2023 whitepaper on AI — to rely on existing laws and regulatory bodies extending their oversight activity to cover AI risks that might arise on their patch. One tweak to the approach, announced in February, was a tiny amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and develop tools to help them examine AI systems.

No timeline was provided for disbursing this small pot of extra funds, and multiple regulators are in the frame. Last month the U.K. secretary of state wrote to 13 regulators and departments, among them the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency, asking them to publish an update on their “strategic approach to AI”. Split equally 13 ways, the £10 million works out at less than £1 million per body (roughly £770,000 each) to top up budgets for tackling fast-scaling AI risks.

Frankly, it looks like an incredibly low level of additional resource for already overstretched regulators if AI safety is actually a government priority. It also means there’s still zero cash or active oversight for AI harms that fall between the cracks of the U.K.’s existing regulatory patchwork, as critics of the government’s approach have pointed out before.

A new AI safety law might send a stronger signal of priority — akin to the EU’s risk-based AI harms framework that’s speeding toward being adopted as hard law by the bloc. But there would also need to be a will to actually enforce it. And that signal must come from the top.

Uber under pressure over facial recognition checks for drivers

UK to avoid fixed rules for AI – in favor of ‘context-specific guidance’
