Stop Fearing Artificial Intelligence

Editor’s note: Tim Oates is chief scientist at CircleBack. He holds a PhD in computer science with an emphasis in machine learning from UMass Amherst and is also the Oros Family Professor of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County.

As yet another tech pioneer with no connection to artificial intelligence steps out to voice his fears about AI being catastrophic for the human race, I feel the need to respond. While I respect Steve Wozniak’s technological contributions to our culture, I fear that he, like so many others (Musk, Hawking, Gates), is poisoning the well out of fear of something he doesn’t truly understand.

Conflating the facts of technology’s rapid progress with a Hollywood understanding of intelligent machines is provocative (honestly, it’s a favorite in my most-loved science fiction books and movies), but this technology doesn’t live in a Hollywood movie. It isn’t HAL or Skynet, and it deserves a grounded, rational look.

For the sake of argument, let’s assume that we have created (or can plausibly create) a superhuman AI. Such an AI could, like us, think all kinds of things: “the humans created me and they’re really interesting,” or “the humans’ bodily functions are mildly annoying,” or “all humans must die!” All of these are equally speculative. So why anyone gives the doomsday scenario more weight than the others is a bit of a mystery to me.

It may be that, in a world filled with pop culture stories and polluted by a fear of tech, the doomsday story is the most entertaining, taking its spot next to UFO-created crop circles and the like. But the assumptions that this story-presented-as-an-idea rests on are unfounded and highly improbable.

Here’s what you’re supposed to believe about true AI:

  • It has an “I,” a sense of self distinct from others.
  • It has the intellectual capacity to step outside of the boundaries of its intended purpose and programming to form radically new goals for itself (the “I”).
  • It chooses, from a possibly enormous set of effective plans, one involving lots of death and destruction.
  • It has access to resources on a global scale to carry out the plan.

Sound reasonable to you? Me neither.

But for clarity’s sake, let’s unpack these assumptions, starting with the notion that AI has a distinct “I” capable of stepping outside its intended programming. Even the quickest glance over the history of AI confirms there’s a tradeoff between machine intelligence and adaptability.

Narrowly intelligent machines like Deep Blue and Watson can play chess or answer Jeopardy questions better than anyone alive while not being able to understand checkers or the Trivia Crack app. More generally intelligent machines, on the other hand, can “learn” to do many things but will ultimately do them all poorly.

For example, the Association for the Advancement of Artificial Intelligence hosted a competition on “general game playing,” where a program is given the rules of a previously unseen game and asked to play it. The entrants could play many types of games after “reading” the rules, from board games to card games to strategy games, but they played them all poorly.

What I’m getting at here is that an AI that’s really good at, say, designing individualized cancer drugs isn’t usually well-suited for other tasks. Deep Blue can’t play checkers because it can’t “mentally” represent or reason about checkers, and all other single-purpose AIs have similar “mental” gaps. And AIs with broad knowledge? They probably won’t be very good at anything (including world domination).

But let’s suppose, for a second, that an AI does learn to think intelligently outside its programming and that it’s become discontent. Would this superhuman intelligence inherently go nuclear, or would it likely just slack off a little at work or, in extreme cases, compose rap music in Latin? In a world filled with a nearly infinite number of things a thinking entity can do to placate itself, it’s unlikely “destruction of humanity” will top any AI’s list.

But let’s assume that it has: that we’ve built an AI that learns to think outside its programming, that it’s become discontent, and that it’s bent on world domination. As you can imagine, world domination isn’t a trivial endeavor; it requires resources with global reach. And because we’re not making the Terminator error of handing unfettered control over all military might to a computer program, the more plausible scenario is an AI with control over financial systems, communication infrastructure, and the like.

Even if this were the case, there is absolutely no reason to believe that, by virtue of running on a computer, an AI will be better at computers than we are. In fact, heated debate inspired by theorems in mathematical logic suggests just the opposite. Just as living in a house doesn’t make you a carpenter, being hosted on a computer doesn’t guarantee a sophisticated understanding of computers.

I’ve been reading and thinking about AI since I was a child, and I’ve been working professionally in the field full-time since entering grad school more than 20 years ago. I’m as excited about AI today as I was at the age of 10, and not because I think it’ll grant me some fetishistic power over the world, or because it’ll allow me to become part of the “singularity” or any of those other “thrilling” stories.

Instead, I’m excited by AI because of what it might tell us about what it means to be human, and because of how it might help us speed up the process of solving some of the world’s most pressing problems.

I do believe we’ll create a truly intelligent machine at some point. Not in my lifetime, but eventually. What we shouldn’t do is spend the meantime telling scary stories. Leave that to the novelists and filmmakers. Please.