The era of AI-human hybrid intelligence

You hear a lot these days about impending doom as AI becomes ever smarter.

Big names are calling for caution: the futurist optimism of proponents like Ray Kurzweil is outweighed by the concern expressed by Bill Gates, Elon Musk and Stephen Hawking. And Swedish philosopher Nick Bostrom’s thought experiments about where AI might lead are scary enough to sustain a new strain of Nordic noir. There are, indeed, reasons to be concerned.

The fictional HAL’s refusal to open the pod bay doors in Kubrick’s 2001: A Space Odyssey seems a lot less like fiction than it did when the movie came out almost 50 years ago. Today, we have real reason to worry about autonomous drones deciding whom to take out, or self-driving cars choosing between hitting a roadside tree and hitting a child.

It doesn’t have to be like that. There is a better way to make use of AI, and the key is recognizing that human and machine intelligences are complementary.

The bottom line: Machines just ain’t as smart as people. Sure, we have machines that can play chess, Jeopardy! and now Go. But we long ago left behind the era when we considered such feats the only relevant measures of what it means to be smart.

It’s been 20 years since Daniel Goleman popularized the concept of emotional intelligence (EI). Whether or not you think EI deserves to be called a form of intelligence doesn’t really matter; there is clearly a set of characteristics and capacities we have that machines do not share, and they play a key role in how we reason and act.

Old-school economists might still hang on to the notion that we are all rational decision makers, but the field of behavioral economics has demonstrated that there are bounds to the rationality of economic agents, and that much of our rationality is really post hoc rationalization.

What’s the takeaway from this? Put simply, machine intelligence and human intelligence are different things, and using the same vocabulary for the two phenomena only sows confusion. A step in the right direction would be to stop talking about machines getting smarter; that’s an insult to smartness.

Yes, machines can do more and more, and their logic grows ever more complex, so they can respond appropriately to more complicated situations and handle more dimensions of variation. But our respective strengths lie in different arenas. And that means we need to be exploring symbiosis, not competition.

This observation has particular significance for the development of natural language generation (NLG) technology: machines that write. Except I’m not sure that’s really how we should describe it. That phrase is shorthand for saying that we develop algorithms that take data as input and produce textual content as output. But that doesn’t have quite the same ring to it.
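To make that concrete, here is one way the data-in, text-out pattern might look in code. This is a hypothetical sketch, not any particular product’s method: the record fields, the figures and the single sentence template are all invented for illustration, and real NLG systems layer content selection, document planning and far richer linguistic realization on top of this skeleton.

```python
# A minimal sketch of data-to-text NLG: structured records in,
# narrative sentences out. All field names and figures below are
# hypothetical, purely for illustration.

def realize(record: dict) -> str:
    """Turn one structured data record into a sentence of narrative."""
    change = record["revenue"] - record["prior_revenue"]
    if change == 0:
        return (f"In {record['quarter']}, {record['region']} revenue "
                f"was flat at ${record['revenue']:,}.")
    direction = "rose" if change > 0 else "fell"
    pct = abs(change) / record["prior_revenue"] * 100
    return (f"In {record['quarter']}, {record['region']} revenue "
            f"{direction} {pct:.1f}% to ${record['revenue']:,}.")

if __name__ == "__main__":
    quarterly = [
        {"region": "EMEA", "quarter": "Q3",
         "revenue": 1_240_000, "prior_revenue": 1_100_000},
        {"region": "APAC", "quarter": "Q3",
         "revenue": 930_000, "prior_revenue": 1_020_000},
    ]
    print(" ".join(realize(r) for r in quarterly))
```

Run over the two invented records, this prints one sentence per record; multiply that by thousands of records and you have exactly the sort of routine descriptive reporting that machines can churn out tirelessly.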

NLG co-authoring gives you the best of both worlds. Human authors bring insight, nuance and a subtle understanding of audience. Machines can do the grunt work that would take a human author endless amounts of time, where it’s feasible at all, delivering detailed and accurate narratives about information that would otherwise stay buried in data.

Of course, the same collaborative approach has gained traction in many other areas, from centaur chess to problems in climate change and geopolitical conflict. The essence of the story is the same: Let the machines contribute what can be mechanized, but recognize that, for the foreseeable future, every kind of problem solving has aspects that require a human touch.

We are not machines. Machines are not humans. We each bring something to the party. Except that machines don’t really go to parties, which just reinforces the point.