When Will The Machines Wake Up?

Machines matter to people, but they “matter” only because they affect people. It’s widely supposed that today’s machines themselves cannot be “affected,” because they have no feelings, no conscious thought, no sentience.

Interestingly enough, it might not always be that way.

While biology has held a relatively firm monopoly on “consciousness” for the last few hundred million years, many machine learning researchers believe that humans may eventually replicate self-awareness and inner experience (rough terminology that we’ll use as shorthand for the broader term “consciousness” throughout this article) in our machines. And some of their estimates are sooner than one might expect.

Over the last three months, I’ve interviewed more than 30 artificial intelligence researchers (essentially all of whom hold PhDs). I asked them why they believe, or don’t believe, that consciousness can be replicated in machines.

One of the most common contentions as to why consciousness will eventually be replicated is that nature bumbled its way to human-level conscious experience, so with a deeper understanding of the neurological and computational underpinnings of what is “happening” to create a conscious experience, we should be able to do the same.

Professor Bruce MacLennan sums up the sentiments of many of the researchers in his response: “I think that the issue of machine consciousness (and consciousness in general) can be resolved empirically, but that it has not been to date. That said, I see no scientific reason why artificial systems could not be conscious, if sufficiently complex and appropriately organized.”

It might be supposed that attaining conscious experience in machines will require not just advances in cognitive and computer science, but also an advancement in how research and inquiry themselves are conducted. Dr. Ben Goertzel, artificial intelligence researcher behind OpenCog, had this to say: “I think that as brain-computer interfacing, neuroscience and AGI develop, we will gradually gain a better understanding of consciousness — but this may require an expansion of the scientific methodology itself.”

Some researchers are even more optimistic, believing that machines may already be conscious in some form (such as Dr. Stephen Thaler of Imagitron, LLC), or that they stand a good chance of attaining consciousness within the next five years (like Dr. Pieter Mosterman of McGill University in Canada); others are less hasty with their timelines.

Nature bumbled its way to human-level conscious experience … we should be able to do the same.

MIT’s Dr. Joscha Bach put his rough estimate for machine consciousness at 2101-2200 (along with a few others who guessed that same time frame), and Dr. Sean Holden of Cambridge University believes that, despite seeing no insurmountable obstacle, conscious machines may not arrive until sometime between 2201 and 3000. Dr. Holden sums up his perspective: “Yes, it’s possible. Humans are made from stuff that obeys the laws of physics — they constitute an existence proof. The difficulty is just that of working out how the machine (taken in a very wide sense) works and how to build an equivalent.”

Indeed, that is the difficult part.

It could be that many of the “optimistic” researchers are aware of all the “impossible” feats that have been beaten to smithereens by time and focused scientific inquiry within their lifetimes (from the moon landing to mapping the human genome, and beyond). I wanted my inquiry to pry beyond just their inclinations as to whether machine consciousness could happen; I asked them when.

The results from the survey, shown in the graphic below, included 32 responses from different AI/cognitive science researchers. (For the complete collection of interviews, and more information on all of our 40+ respondents, visit the original interactive infographic here on TechEmergence.)

[Infographic: “Conscious Machines” survey results by predicted time frame]

The most popular range across all the respondents was the third time frame, 2036-2060. The second most common response (behind those who chose not to give a date range at all) was the second time frame, 2021-2035.

Though some researchers suggested longer time frames and some shorter ones, the bulk of the responses (nearly 50 percent of the respondents who were comfortable making a prediction) fell in the 2021-2060 range.
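For readers curious how a tally like this is computed, here is a minimal sketch in Python. The time-frame labels mirror the bins above, but the responses themselves are hypothetical placeholders, not the actual survey data:

```python
# Minimal sketch: tallying survey picks by time-frame bin.
# NOTE: the responses below are hypothetical, not the real survey results.
from collections import Counter

responses = [
    "2021-2035", "2036-2060", "2036-2060", "2061-2100", "2101-2200",
    "2021-2035", "2036-2060", "no estimate", "2201-3000", "2036-2060",
]

counts = Counter(responses)

# Only count researchers who were comfortable making a prediction.
predictors = sum(n for label, n in counts.items() if label != "no estimate")

for label, n in counts.most_common():
    print(f"{label:>11}: {n} responses")

near_term = counts["2021-2035"] + counts["2036-2060"]
print(f"Share of predictors in 2021-2060: {near_term / predictors:.0%}")
```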

Some of these time frame estimates seem to align with Dr. Nick Bostrom’s poll of artificial intelligence researchers in 2012-2013. Bostrom asked 170 artificial intelligence researchers to estimate, with 50 percent confidence, when human-level machine intelligence might be developed (i.e., machines that can not only play chess, but also write poetry, manage businesses, and do all the other things that humans do), and found a median response of 2040. (I would encourage you to see the full report here.)

Predicting the future is notoriously difficult, and hardly any of my own respondents would express anything close to “certainty” about events in the future. However, if legitimately aware and conscious machines are to exist within our lifetime, we may have new questions on our hands.

If a machine became conscious enough to feel, even at the level of a dog or a squirrel, should we not have laws to protect it from abuse or neglect?

If machines were in fact able to consciously “feel” physical or emotional sensations, would we be obligated to program them to only experience happiness and bliss?

If machines that were approaching human general intelligence were to be endowed with consciousness, would this potentially make them more willful and less easily controlled by their human creators?