The journal Nature announced in late January that a computer system designed by Google’s DeepMind had defeated a human master at the ancient Chinese board game “Go.” This impressive achievement once again raised expectations of a future in which computers possess artificial intelligence, with major media outlets worldwide touting that prospect.
One of the major questions raised by DeepMind’s achievement is: what are the outer limits, if any, of intelligent machines? In November of last year, Dr. Kira Radinsky, a computer scientist and “machine learning” expert, argued in the Israeli newspaper “Ha’aretz” that computers will be able to accurately predict the outcome of the Israeli-Palestinian conflict. Feed the computer enough data on a number of “parallel universes,” she wrote, and it will be able to observe the implications of each of these universes, find patterns among them, and thereby make predictions about the future of the conflict.
While this argument sounds plausible in theory, computers are not “creative,” do not “learn” and cannot “predict.” They can only be tasked with making inductive predictions based on past experience, seeking complex correlations in databases and presenting them as “actionable insights.”
No matter which side of the debate one falls on, DeepMind’s achievement requires us to reexamine in a more accurate way what learning and prediction actually mean.
There are two main obstacles that prevent machines from learning and predicting the way humans do. First, as noted above, because computers can only make inductive predictions based on past experience, the future they predict will always be a continuation of the past behavior of the actors they are examining.
This means that the predictive powers of computers work well in cases where reality does not change dramatically, but fail wherever the future holds dramatic, unpredictable change.
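This failure mode is easy to demonstrate. Below is a minimal sketch, using entirely made-up numbers, of a model fitted to past observations: it keeps projecting the old trend after an unforeseen reversal. The data, the fitting routine and the “regime change” are all hypothetical illustrations, not any real forecasting system.

```python
# Hypothetical illustration: inductive extrapolation breaks when the
# underlying behavior changes. All numbers here are invented.

def fit_line(ts, ys):
    """Ordinary least-squares fit of y = a + b*t."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys)) / \
        sum((t - mean_t) ** 2 for t in ts)
    a = mean_y - b * mean_t
    return a, b

# "Past": a steady upward trend the model can learn perfectly.
past_t = list(range(10))
past_y = [2.0 * t for t in past_t]
a, b = fit_line(past_t, past_y)

# "Future": an unforeseen reversal at t = 9 (the regime change).
def actual(t):
    return 18.0 - 3.0 * (t - 9) if t > 9 else 2.0 * t

predicted_15 = a + b * 15   # the model extends the old trend
actual_15 = actual(15)      # reality has reversed course
print(predicted_15, actual_15)  # prints: 30.0 0.0
```

The model is not wrong about the past; it is simply blind to a future that does not resemble it.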
Second, it is well known that correlation does not imply causation. While computers may be very good at finding correlations with high levels of statistical confidence, they cannot judge whether those correlations are meaningful or ridiculous.
For example, the website Spurious Correlations presents such correlations, citing (with a very high level of confidence) the correlation between U.S. government spending on science and the number of suicides by hanging. The more data computers collect, the more spurious correlations can be found. Only human agents, because they have the ability to understand and grasp meaning, can distinguish meaningful correlations from meaningless ones.
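A toy computation shows how cheaply such correlations arise. The two series below are entirely synthetic and causally unrelated; they merely both trend upward over time, which is enough to produce a near-perfect correlation coefficient.

```python
# Toy illustration: two made-up, causally unrelated series that share
# only a time trend still show a correlation close to 1.0.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

series_a = [100, 104, 109, 113, 118, 121, 127, 130]  # e.g. some budget
series_b = [40, 41, 44, 45, 48, 49, 52, 54]          # e.g. some count

r = pearson(series_a, series_b)
print(round(r, 3))  # prints a value very close to 1.0
```

A statistics package would report this as a highly significant relationship; only a human reader knows the two series have nothing to do with each other.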
Additionally, humans, unlike computers, have a unique capacity not only to learn from the past but also to invent a new future: we can imagine a future that does not yet exist. Technical inventions, for example, demonstrate humanity’s capacity to invent a future intrinsically different from historical experience. Only humans could have dreamt up the complex technologies we have come to take for granted. Computers, on the other hand, possess no capacity to imagine a different future.
Because humans have this inherent capacity to imagine and create, future changes in markets or geopolitical conditions, which are mostly the result of human actions, cannot be predicted from past events alone.
When the human element is factored into something as complex as the Israeli-Palestinian conflict, the war against ISIS, the futures markets or the financial industry, it can significantly swing the outcome, and a computer’s predictions will fail to identify the new situation. To predict future human behavior, human analysts must be deployed to study the data and draw the right conclusions; computers alone will not suffice.
Consider Abu Mazen, for example. Giving up his demand for the right of return for the refugees would run contrary to public opinion and represent a complete betrayal of all his previous statements and beliefs, so nothing like Facebook sentiment analysis would predict it. Yet because he has the free will to do so, he can reverse direction, effectively changing the course of the entire discussion.
This brings to mind Ariel Sharon’s reversal of his longstanding insistence that he would not withdraw Jewish settlers from the Gaza Strip, a withdrawal he ultimately carried out in the summer of 2005. Machines cannot predict such radical departures from what is expected to occur, whereas human analysts can lay out different scenarios and argue for and against various outcomes.
The machine-versus-human debate has in fact divided big data analytics experts into two camps. The first camp is led by “machine learning” and “predictive analytics” experts who foresee a future in which computers possess real “artificial intelligence,” while the second camp argues that only human analysts can reliably draw conclusions from the vast amounts of data collected and stored by humanity.
The most prominent company promoting the latter view is Palantir, a $25 billion company founded by PayPal alumni. Palantir develops big data analytics software whose main purpose is to assist human analysts in studying big data. Similarly, in his book Zero to One, venture capitalist Peter Thiel states that “while computers can find patterns that elude humans, they don’t know how to compare sources or how to interpret complex behaviors. Actionable insights can only come from a human analyst.”
The author of this article stands firmly within this camp, arguing that human capabilities far transcend anything computers can achieve.