In the last few years, the topic of artificial intelligence (AI) has been thrust into the mainstream. No longer just the domain of sci-fi fans, nerds and Google engineers, AI now comes up at parties, coffee shops and even the dinner table: My five-year-old daughter brought it up the other night over taco lasagna. When I asked her if anything interesting had happened at school, she replied that her teacher had discussed smart robots.
The exploration of intelligence, be it human or artificial, is ultimately the domain of epistemology, the study of knowledge. Since humanity's first musings about creating artificial minds back in antiquity, epistemology has framed the debate over how to do it. The question I hear most often from the public is: How can humans develop another intelligent consciousness if we can't even understand our own?
It's a prudent question. The human brain, despite weighing only about 3 pounds, is the least understood organ in the body. And with some 86 billion neurons forming roughly 100 trillion connections, it's safe to say it will be a long time before we fully understand it.
Generally, scientists believe human consciousness is a compilation of many chemicals in the brain, forced through a prism that produces a cognitive awareness insisting an entity knows not only itself but also the outside world.
Some people argue that the quintessential key to consciousness is awareness. The French philosopher and mathematician René Descartes may have taken the first step when he said, “I think, therefore I am.” But thinking alone does not adequately define consciousness; justifying one's own thinking comes much closer. It really should be: “I believe I'm conscious, therefore I am.”
But even awareness doesn't ring true to me as the basis for a grand theory of consciousness. We can teach a robot all day to insist it is aware, but we can't teach it to prove it's not a brain in a vat, something people still can't do either.
Christof Koch, chief neuroscientist at the Allen Institute for Brain Science, offers a more holistic version of consciousness. He thinks consciousness can arise in any complex processing system, including animals, worms and possibly even the Internet.
In an interview, when asked what consciousness is, Koch replied, “There’s a theory, called Integrated Information Theory, developed by Giulio Tononi at the University of Wisconsin, that assigns to any one brain, or any complex system, a number — denoted by the Greek symbol Φ — that tells you how integrated a system is, how much more the system is than the union of its parts. Φ gives you an information-theoretical measure of consciousness. Any system with integrated information different from zero has consciousness. Any integration feels like something.”
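To give a flavor of what "how much more the system is than the union of its parts" can mean information-theoretically, here is a toy sketch in Python. Note that this computes total correlation (multi-information), a much simpler quantity than Tononi's actual Φ, which requires analyzing a system's cause-effect structure over its minimum information partition; the function names and example distributions are illustrative inventions, not anything from IIT's formal machinery.

```python
from math import log2

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution
    given as a mapping from outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Total correlation: sum of the parts' marginal entropies minus
    the whole system's joint entropy. It is zero exactly when the
    parts are independent, i.e. the whole adds nothing to its parts."""
    px, py = {}, {}
    for (x, y), p in joint.items():  # marginalize out each variable
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two perfectly coupled bits: the whole carries 1 bit beyond its parts.
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: no integration at all.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(total_correlation(coupled))      # prints 1.0
print(total_correlation(independent))  # prints 0.0
```

On this crude proxy, the coupled system is "integrated" and the independent one is not, which is the intuition behind Koch's claim that any system with nonzero integrated information has some degree of consciousness.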
If Koch and Tononi are correct, then it would be a mistake to ever think one consciousness could equal another. It would be apples and oranges. Just as no two snowflakes or planets are the same, we must guard against anthropomorphic prejudice when thinking about consciousness.
In this way, the first autonomous super-intelligence we create may think and behave dramatically differently than we do, so much so that it may never relate to us, or vice versa. In fact, every AI we create in the future may leave us in very short order for distant parts of the digital universe, an ego-thumping concept made visual in the brilliant movie Her. Of course, an AI might also simply terminate itself upon realizing it's alive and surrounded by curious humans peering at it.
Whatever happens, just as anthropology has the concept of cultural relativism, we must be ready for consciousness relativism: the idea that one consciousness may be totally different from another, despite the hope that math, logic and coding will serve as obvious communication tools.
This makes even more sense when you consider how limited humans and their consciousness might actually be. After all, nearly all our perception comes through our five senses, which are how our brain makes sense of the world. And every one of our senses captures only a narrow slice of what is physically there. The eye, for example, sees only a tiny fraction of the electromagnetic spectrum.
For this reason, I'm reluctant to insist that consciousness is one thing or another, and I lean toward Koch and Tononi's view that variations of consciousness can be found in many forms across the spectrum of existence.
This also reinforces why I'm reluctant to believe that AI will fundamentally be like us. I surmise it may learn to replicate our behavior, perhaps even perfectly, but it will always be something different. Replication is no different from the behavior of a wind-up doll. Most humans hope for much more from themselves and their consciousness. And, of course, most AI engineers want much more for the machines they hope to bring to conscious life.
Despite that, we will still try to create AI with our own values and ways of thinking, imbuing it with traits we possess. If I had to pinpoint one behavioral trait of consciousness that all humans share and that should also be instilled in AI, it would be empathy. Empathy will shape the kind of AI consciousness the world wants and needs, and one that people can also understand and accept.
On the other hand, if a created consciousness can empathize, then it must also be able to like or dislike — and even to love or hate something.
Therein lies the conundrum. For a consciousness to make judgments of value, both liking and disliking (love and hate) must be part of the system. No one minds thinking about AIs that can love, but super-intelligent machines that can hate? Or feel sad? Or feel guilt? That's much more controversial, especially in the drone age, when machines control autonomous weaponry. And yet anything less (coding empathy into an intelligence without its darker counterparts) just creates a follower machine: a wind-up doll consciousness.
Kevin LaGrandeur, a professor at the New York Institute of Technology, recently wrote, “If a machine could truly be made to ‘feel’ guilt in its varying degrees, then would we have problems of machine suffering and machine ‘suicide’?” If we develop a truly strong artificial intelligence, we might, and then we would face the moral problem of creating a suffering being.
It's a pickle, for sure. I don't envy the programmers endeavoring to bring a super-intelligence into our world, knowing that their creations may consciously hate things, including their creators. Such programming may lead to a world where robots and machine intelligences experience the same modern-day problems afflicting humanity: angst, bigotry, depression, loneliness and rage.