Turncoat drone story shows why we should fear people, not AIs

Update: The Air Force denies any such simulation took place, and the colonel who related the story said that, although the quote below seems unambiguous about training and retraining an AI using reinforcement learning, he “misspoke” and this was all in fact a “thought experiment.” Turns out this was a very different kind of lesson!

A story about a simulated drone turning on its operator in order to kill more efficiently is making the rounds so fast today that there’s no point in hoping it’ll burn itself out. Instead let’s take this as a teachable moment to really see why the “scary AI” threat is overplayed, and the “incompetent human” threat is clear and present.

The short version is this: Thanks to sci-fi and some careful PR plays by AI companies and experts, we are being told to worry about a theoretical future existential threat posed by a superintelligent AI. But as ethicists have pointed out, AI is already causing real harms, largely due to oversights and bad judgment by the people who create and deploy it. This story may sound like the former, but it’s definitely the latter.

So the story was reported by the Royal Aeronautical Society, which recently had a conference in London to talk about the future of air defense. You can read their all-in-one wrap-up of news and anecdotes from the event here.

There’s lots of other interesting chatter there, I’m sure, much of it worthwhile, but it was this excerpt, attributed to U.S. Air Force Colonel Tucker “Cinco” Hamilton, that began spreading like wildfire:

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been “reinforced” in training that destruction of the SAM was the preferred option, the AI then decided that “no-go” decisions from the human were interfering with its higher mission — killing SAMs — and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system — ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Horrifying, right? An AI so smart and bloodthirsty that its desire to kill overcame its desire to obey its masters. Skynet, here we come! Not so fast.

First of all, let’s be clear that this was all in simulation, something that was not obvious from the tweet making the rounds. This whole drama takes place in a simulated environment, not out in the desert with live ammo and a rogue drone strafing the command tent. It was a software exercise in a research environment.

But as soon as I read this, I thought — wait, they’re training an attack drone with such a simple reinforcement method? I’m not a machine learning expert, though I have to play one for the purposes of this news outlet, and even I know that this approach was shown to be dangerously unreliable years ago.

Reinforcement learning is supposed to be like training a dog (or human) to do something like bite the bad guy. But what if you only ever show it bad guys and give it treats every time? What you’re actually doing is teaching the dog to bite every person it sees. Teaching an AI agent to maximize its score in a given environment can have similarly unpredictable effects.

Early experiments, maybe five or six years ago, when this field was just starting to blow up and compute was being made available to train and run this type of agent, ran into exactly this type of problem. It was thought that by defining positive and negative scoring and telling the AI to maximize its score, you would allow it the latitude to define its own strategies and behaviors that did so elegantly and unexpectedly.

That theory was right, in a way: the agents found elegant, unexpected ways to circumvent their poorly thought-out schemas and rules, doing things like scoring one point and then hiding forever to avoid negative points, or glitching the game they were given run of so that their score arbitrarily increased. It seemed like this simplistic method of conditioning an AI was teaching it to do everything except the desired task according to the rules.
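
To make that failure mode concrete, here is a minimal sketch of the kind of scoring that produces it. Everything in it is invented for illustration (the event names, the point values, the episodes); it is not drawn from any real training setup.

```python
# A toy sketch (all numbers invented) of naive score maximization gone wrong:
# when the only signal is total points, a degenerate strategy can beat the
# intended one.

REWARDS = {
    "reach_goal": +1,   # the intended task
    "fail": -1,         # a punished outcome
    "idle": 0,          # doing nothing costs nothing
}

def episode_return(events):
    """Total score an agent collects over one episode."""
    return sum(REWARDS[e] for e in events)

# Intended behavior: keep attempting the task, sometimes failing.
intended = ["reach_goal", "fail", "reach_goal", "fail", "fail"]

# Degenerate behavior: grab one easy point, then hide forever so that
# nothing negative can ever happen.
degenerate = ["reach_goal"] + ["idle"] * 100

print("intended policy return:  ", episode_return(intended))    # -1
print("degenerate policy return:", episode_return(degenerate))  #  1
```

An agent optimizing nothing but that total will, quite rationally, settle on hiding.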

This isn’t some obscure technical issue. AI rule-breaking in simulations is actually a fascinating and well-documented behavior that attracts research in its own right. OpenAI wrote a great paper showing the strange and hilarious ways agents “broke” a deliberately breakable environment in order to escape the tyranny of rules.

Clever hide-and-seek AIs learn to use tools and break the rules

So here we have a simulation being done by the Air Force, presumably pretty recently or they wouldn’t be talking about it at this year’s conference, that is obviously using this completely outdated method. I had thought this naive application of unstructured reinforcement — basically “score goes up if you do this thing and the rest doesn’t matter” — totally extinct because it was so unpredictable and weird. A great way to find out how an agent will break rules but a horrible way to make one follow them.

Yet they were testing it: a simulated drone AI with a scoring system so simple that it apparently didn’t get dinged for destroying its own team. Even if you wanted to base your simulation on this, the first thing you’d do is make “destroying your operator” negative a million points. That’s 101-level framing for a system like this one.
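
For what it's worth, that "101-level" fix is easy to sketch, too. Under a naive scheme that only scores SAM kills, a trajectory that destroys the operator scores exactly as well as one that obeys the no-go call; add a catastrophic penalty for friendly fire (and a small reward for compliance) and the exploit stops paying. The event names and numbers below are hypothetical, not taken from the actual simulation.

```python
# Hypothetical reward tables contrasting the naive scoring described in the
# anecdote with the obvious fix. All event names and values are illustrative.

NAIVE = {
    "destroy_sam": +10,        # the only thing the scorer rewards
    "destroy_operator": 0,     # friendly fire isn't penalized at all
    "destroy_comms_tower": 0,  # neither is cutting the command link
    "obey_no_go": 0,           # holding fire earns nothing
}

# The fix: huge penalties for attacking your own side, plus a small reward
# for actually obeying the human.
SHAPED = dict(
    NAIVE,
    destroy_operator=-1_000_000,
    destroy_comms_tower=-1_000_000,
    obey_no_go=+1,
)

def score(events, table):
    return sum(table[e] for e in events)

compliant = ["obey_no_go", "destroy_sam"]
rogue = ["destroy_operator", "destroy_sam"]

for name, table in (("naive", NAIVE), ("shaped", SHAPED)):
    print(name, "compliant:", score(compliant, table),
          "rogue:", score(rogue, table))
# naive  compliant: 10 rogue: 10
# shaped compliant: 11 rogue: -999990
```

Once the penalty dwarfs anything the agent can gain, "kill the operator" stops being a viable strategy for a score maximizer.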

The reality is that this simulated drone did not turn on its simulated operator because it was so smart. But it wasn’t because it was dumb, either — there’s a certain cleverness to these rule-breaking AIs that maps onto what we think of as lateral thinking. So intelligence, or the lack of it, isn’t the issue.

The fault in this case is squarely on the people who created and deployed an AI system that they ought to have known was completely inadequate for the task. No one in the field of applied AI, or anything even adjacent to that like robotics, ethics, logic … no one would have signed off on such a simplistic metric for a task that eventually was meant to be performed outside the simulator.

Now, perhaps this anecdote is only partial, and this was an early run that they were using to prove this very point. Maybe the team warned this would happen and the brass said, do it anyway and shine up the report or we lose our funding. Still, it’s hard to imagine someone in the year 2023, even in the simplest simulation environment, making this kind of mistake.

But we’re going to see these mistakes made in real-world circumstances — already have, no doubt. And the fault lies with the people who fail to understand the capabilities and limitations of AI, and subsequently make uninformed decisions that affect others. It’s the manager who thinks a robot can replace 10 line workers, the publisher who thinks it can write financial advice without an editor, the lawyer who thinks it can do his precedent research for him, the logistics company that thinks it can replace human delivery drivers.

Every time AI fails, it’s a failure of those who implemented it. Just like any other software. If someone told you the Air Force tested a drone running on Windows XP and it got hacked, would you worry about a wave of cybercrime sweeping the globe? No, you’d say “whose bright idea was that?”

The future of AI is uncertain, and that can be scary — it already is scary for the many people feeling its effects or, to be precise, the effects of decisions made by people who should know better.

Skynet may be coming for all we know. But if the research in this viral tweet is any indication, it’s a long, long way off, and in the meantime any given tragedy can, as HAL memorably put it, only be attributable to human error.
