Defining our relationship with early AI

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears…in…rain. Time to die.” — Roy Batty, Blade Runner

Artificial intelligence has fascinated mankind for more than half a century, with the first public mention of computer intelligence recorded during a London lecture by Alan Turing in 1947. More recently, the public has been exposed to headlines that increasingly reference the growing power of AI, whether that's AlphaGo's defeat of legendary Go player Lee Se-dol, Microsoft's racist AI bot named Tay or any number of other new developments in the machine learning field. Once a plot device for science-fiction tales, AI is becoming real — and human beings are going to have to define their relationship with it sooner rather than later.

Peter Diamandis, co-founder and vice-chairman at Human Longevity, Inc., touches on that relationship in a post he authored on LinkedIn, titled "The next sexual revolution will be digitized." Diamandis points to recent reports showing that the Japanese are increasingly abandoning sex and relationships, while a growing subset of men report that they prefer virtual girlfriends to real ones.

“This is only the beginning,” he said. “As virtual reality (VR) becomes more widespread, one major application will inevitably be VR porn. It will be much more intense, vivid, and addictive — and as AI comes online, I believe there will be a proliferation in AI-powered avatar and robotic relationships, similar to those characters depicted in the movies Her and Ex Machina.”

Our budding relationship with AI

Let’s back up a minute. Did Diamandis really say that he thinks people will begin to form relationships with AI robots? It’s not that hard to believe, given the example of men who prefer virtual girlfriends to real ones — but how close are we to actually creating an avatar that loves you back?

To answer this, first we have to understand what AI actually is, and what it has come to represent to the public. There are two basic types of AI: strong AI, and applied or "weak" AI (technically, cognitive simulation, or "CS," is a third type, but we'll focus on the first two here).

Strong AI is a work-in-progress, but the ultimate goal is to build a machine with intellectual ability indistinguishable from that of a normal human being. Joseph Weizenbaum of the MIT AI Laboratory has described the ultimate aim of strong AI as “nothing less than to build a machine on the model of man, a robot that is to have its childhood, to learn language as a child does, to gain its knowledge of the world by sensing the world through its own organs, and ultimately to contemplate the whole domain of human thought.”

Strong AI is also the type of AI that you hear about in the movies — the Skynet program that rebels against its human creators in the Terminator movies, or HAL 9000 of 2001: A Space Odyssey. It is predicted that if or when this type of superhuman intelligence is made possible and brought online, we will have triggered the singularity. This type of AI is years away from completion, if completion is even possible — many doubt we will ever reach that point.

Weak/applied AI, on the other hand, is the type of thing you read about in the headlines. Anything with the adjective “smart” slapped onto it is generally relying on weak AI of some kind — any artificial form of intelligence that may “learn” and even figure out ways to write its own code, but that is limited in function to very few tasks.

The programs that drive smart cars, the chatbots that guide us through customer service and even the previously mentioned AlphaGo are all examples of weak or applied AI. These systems operate within the boundaries of a "micro-world," and may even be so advanced as to be considered "expert systems." But they have no "common sense," no understanding of how their recommendations fit into a larger context or the world beyond their micro-world. They are essentially complex input/output systems that specialize in one area, easily distinguishable from human intelligence by these deficiencies.
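To make the distinction concrete, here is a minimal sketch (in Python, with an entirely hypothetical triage bot) of what a micro-world input/output system looks like: a fixed set of rules, canned answers and a fallback for everything else.

```python
# A toy "micro-world" system: a hypothetical customer-service triage
# bot that maps a handful of known topics to canned answers. It is an
# input/output mapping with no common sense and no model of anything
# beyond its narrow domain.

RULES = {
    "reset password": "You can reset your password under Settings > Account.",
    "refund": "Refunds are processed within 5-7 business days.",
    "cancel order": "Orders can be cancelled within 24 hours of purchase.",
}

def triage(message: str) -> str:
    """Return a canned answer if the message touches a known topic."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Outside its micro-world the system has nothing to offer:
    # no larger context, just a fallback line.
    return "Sorry, I can only help with passwords, refunds and orders."

print(triage("How do I reset my password?"))  # canned reply
print(triage("Why is the sky blue?"))         # fallback
```

Swap the rule table for a neural network and the boundary moves outward, but the basic shape, specialized input in and specialized output out, stays the same.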

The focus on interface

The focus on a human-like input/output system is what seems to capture much of society's attention in terms of AI. There is no better measure of the illusion of human intelligence than the ability to converse with a human as a human would. This is evident in the emphasis we've always placed on the Turing test as a way to determine whether or not something is artificially intelligent. If the program can interface with one human and pass as another, bam: the average user would call that "AI."

If it doesn't pass the Turing test, even if it's close, we know that what's on the other side of that screen is fake, and the genuine nature of the conversation is lost. Yet, even if we do know that we're talking to AI, and that AI is able to navigate a conversation deftly, we're often amazed by how human the interaction can feel — so much so that we can suspend our disbelief and forget that we're talking to a machine altogether.

Unfortunately, even within whatever micro-world it's designed to serve, AI often doesn't pass muster in conversation. Chatbots are where we see this in action almost daily. With Microsoft and Facebook announcing chatbot offerings earlier this year, many companies began turning to the technology to help improve customer engagement — but even Salesforce, whose customer service and support outranked all others in the TA CRM Market Index last year, points out that chatbots simply are not yet where they need to be. The only way, it seems, to solve the problem of the inefficient chatbot is to make these systems act more… well, human.

Just how human should we make weak AI?

So here's where all of this culminates. Chatbots and the interfacing aspect of AI aren't going anywhere. Look at Siri or Cortana, for example. These are, technically, chatbots that double as virtual assistants — and they're only going to become more advanced as time goes on. As is, these and other chatbots don't pass the Turing test — and even if they did, somehow, it could be said that these machines still are not "intelligent" or "sentient" in any way, because they possess no understanding of the actual conversation being had. They, like MIT's early "ELIZA" program and Stanford's "PARRY," rely on pre-programmed, canned responses to simulate conversation. To philosopher Ned Block, this makes these systems "no more intelligent than a juke box."
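The canned-response approach behind ELIZA is simple enough to sketch in a few lines of Python. The patterns and templates below are invented for illustration; they gesture at the general technique, not Weizenbaum's actual script:

```python
import random
import re

# A toy ELIZA-style exchange: regex patterns mapped to canned response
# templates, with captured text "reflected" back at the speaker.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

PATTERNS = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r".*"),  # catch-all: the non-committal filler reply
     ["Please tell me more.", "I see. Go on."]),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(message: str) -> str:
    for pattern, templates in PATTERNS:
        match = pattern.match(message)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel lonely"))              # e.g. "Why do you feel lonely?"
print(respond("I am worried about my job"))  # "...you are worried about your job?"
```

Nothing here understands loneliness or jobs; the program shuffles the user's own words back at them, which is exactly Block's jukebox point.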

Nevertheless, at a certain point, we have to ask ourselves just how human we are willing to make weak AI appear. If users don't understand the differences between weak and strong AI, what psychological effects could the deployment of an indistinguishable-from-human chatbot have?

Joseph Rauch, a writer for TalkSpace, a company that provides online messaging therapy, writes about the need to verify human-ness in his line of work.

“We frequently hear from potential clients who want to be sure they are chatting with a therapist, not a chatbot,” he writes. “All of our therapists are licensed, flesh and blood humans, but we understand the concern. Whether it’s online therapy, social media or online dating, everyone deserves to chat with the humans they believe they are connecting with.”

He mentions online dating, where chatbots have already been known to trick people into joining affiliate sites, or even exist just to make the male-to-female ratio seem less tilted toward males. But what if these chatbots were used in business? Going back to the CRM example, a group called Legion Analytics is trying to sell its lead-generation bot named Kylie, which understands small talk, can circle back to previously mentioned topics (such as a child's soccer game), and has even been flirted with by a prospect.

If bots like these become advanced enough, might people feel manipulated or even violated by machines that seem to know them better than they know themselves, especially if these bots truly are able to sell products better than the average human can? That's obviously a long way off, but a chatbot well-versed in conversation and hooked up to a data warehouse with a complete psychological profile of you, the customer, might be able to make a sale using information and persuasive measures that a regular human just couldn't tap into.

Teaching bots emotion

Of course, the way to really personify weak AI would be to teach it emotion — or at least teach it to emulate emotion — which Koko, a company co-founded by Fraser Kelton, purports to do. In an article on Fast Company, Kelton speaks to the need to give chatbots a more human feel: "We're working toward providing empathy as a service to any voice or messaging platform," he says. "We think that's a critical user experience for a world in which you're conversing with computers." The article likens licensing an empathy API from Koko, which could be connected to virtually any chatbot, to sticking a heart into a robot.
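The article doesn't document Koko's API, but the wrapper architecture it describes, an empathy layer bolted onto an existing chatbot, is easy to sketch. Every name below (the cue list, base_bot, empathize) is hypothetical, invented purely to illustrate the pattern:

```python
# A hypothetical sketch of "empathy as a service" as a wrapper layer.
# Nothing here reflects Koko's real API: the cue words, the base bot
# and the empathize() helper are all invented for illustration.

DISTRESS_CUES = {
    "sad": "That sounds really hard.",
    "anxious": "It makes sense to feel that way.",
    "angry": "Your frustration is understandable.",
}

def base_bot(message: str) -> str:
    """Stand-in for any existing task-oriented chatbot."""
    return "Your appointment is confirmed for 3pm on Tuesday."

def empathize(message: str, reply: str) -> str:
    """Prepend an empathetic acknowledgment when a cue word appears."""
    for cue, acknowledgment in DISTRESS_CUES.items():
        if cue in message.lower():
            return f"{acknowledgment} {reply}"
    return reply

user_message = "I'm anxious about my appointment."
print(empathize(user_message, base_bot(user_message)))
# -> "It makes sense to feel that way. Your appointment is confirmed..."
```

The appeal of the layered design is that the underlying bot doesn't change at all; the "heart" is stapled on at the interface, which is also exactly why the empathy is only skin-deep.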

There are upsides to AI that understands subtle nuances in human emotion. A recent study in JAMA showed that smartphone assistants such as Siri performed extremely poorly when responding to users raising sensitive issues, even going so far as to mock one user who asked for help with rape, sexual assault and sexual abuse. In a webinar on the next 20 years in healthcare, Carl W. Nelson, associate professor at Northeastern University's D'Amore-McKim School of Business, points out that "Big Data does have its challenges in terms of confidentiality and things that you would be worried about — but [can be used] appropriately to guide decision-making, to make judgements…" And how complete would an automated medical diagnostic system be without proper knowledge of human emotion to guide it?

So while there is a need for even weak AI to understand and emulate emotion, are we running the risk of creating a homunculus that feigns recognition of the human condition, and may even regurgitate cues to generate an emotional response in its user, even though these are "canned" responses? Will those with little to no knowledge of these bots begin to treat them as more than just a bot?

Consequences for society

As time goes on, it’s apparent that our technology will continue to astound us. As we see more movies and TV shows with AI robots, we will, no doubt, stop seeing these things as elements of science fiction and begin to wonder when they will become real. While movies such as Blade Runner played with the question long ago, recent advancements such as the Android Dick project, in conjunction with new shows like Westworld, make us realize that perhaps we’ll be dealing with the ethics of AI sooner rather than later.

The ethical question doesn't center on whether or not these AI actually have feelings or rights; rather, it asks what the consequences are for us, the humans who keep them. For example, how do you explain to a child that this indistinguishable-from-a-real-person butler is not and never was human, so it's okay that you're throwing him in the trash? Or that, à la Westworld, it's alright to "kill" or "rape" them, because they're not actually alive, nor are they able to consent? When does the emulation of life become just as important as life itself to a human?

These are all questions we’ll have to tease out over time, and for which there are no easy answers. Ultimately, we’ll have to define our relationship with AI, and find the thin, blurry line that separates weak AI from strong AI, if the latter is even possible. Hopefully, as we look into the mirror at these humanoid creations we’re constructing, we’ll learn more about and strengthen our own sense of humanity, instead of relinquishing it violently, or letting it wash away like tears in rain.