OpenAI, a nonprofit artificial intelligence research company, announced itself to the world on December 11, 2015. With $1 billion in funding from high-profile investors, such as Elon Musk, Reid Hoffman and Peter Thiel, the company put forward an ambitious research agenda to keep artificial intelligence beneficial to humanity.
Both the research agenda and the company's objective rest on the premise that machines may one day surpass human-level intelligence and potentially turn against humankind.
This scenario is thoroughly explained in Ray Kurzweil’s books The Age of Spiritual Machines and The Singularity Is Near, as well as in numerous essays and articles. Kurzweil argues convincingly in favor of the singularity hypothesis, and gives the human race a 50 percent chance of survival.
Leading technologists and investors are not only convinced by his argument, but are also willing to fund research on how to manage and control a potential superintelligence. At the same time, some of the most influential intellectuals of the 20th century, including Daniel Dennett and Noam Chomsky, dismiss the idea of the singularity as an urban legend or even as science fiction.
So, what should we believe?
A brief survey of the 2015 Edge question shows that hardcore scientists and technologists tend to give at least some support to the possibility of the singularity happening in our lifetime, whereas social scientists, philosophers and intellectuals are more sceptical. It almost seems as if C.P. Snow's much-criticized essay, The Two Cultures, is still relevant more than 50 years after its initial publication, and after several decades of inter- and multidisciplinary research programs.
Although there are certainly more than two perspectives on the singularity, technologists' and intellectuals' positions tend to fall into two distinct categories. Perhaps not surprisingly, most technologists favor a materialist perspective based on the scientific paradigm. Intellectuals, on the other hand, often prefer a more philosophical approach.
The key question seems to be whether one believes that the human brain is a machine or not.
AI scientists and technologists answer this question with a certain level of confidence. “Of course the human brain is a machine,” they say; “it is only a matter of time and research funds before we have solved the problem of creating a truly human-like intelligence.” Even sceptics, such as Paul Allen, believe that the singularity may happen eventually. After all, “an adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort.”
The underlying assumption, of course, is that at some point in the future, scientists will be able to turn dead matter into life. If this can be done, exponential growth in computing power, the intelligence explosion and the corresponding control problem would certainly present humankind with a serious existential risk.
If, on the other hand, the human brain is not a machine, if it is something else, something more, perhaps something with a soul, whatever that is, then human consciousness is a philosophical rather than a technological problem, and as such cannot be solved through the application of the scientific method.
From an intellectual’s perspective, science is a language game and superintelligence a big word. We hardly know what intelligence is, let alone superintelligence. All too often, we assume that signs are reliable representations of subjective phenomena, when in fact they’re not. There is, and will always be, a gap between reality and our representations of reality.
Proving a scientific theory or hypothesis is not the same as arriving at the truth, and is certainly not sufficient to predict the future. Intellectuals are usually sensitive to this kind of argument, and frequently grow frustrated with detailed mathematical equations based on more or less unrealistic assumptions.
Maybe we should think less about singularity and more about how augmentation technology will change the human condition in a slow and gradual manner. Maybe the real superintelligence is not a singular entity, but a networked environment in which thinking is no longer an individual activity. Maybe the global brain consists not only of all connected devices, but also the connected humans using those devices, like an omnipresent cyborg with billions of beating hearts.