AI is not a threat to humanity, but an Internet of ‘Smart’ Things may be!

More than 60 years after its conceptual inception, and after many hype-generating moments, AI is once again making its presence felt in mainstream media.

Following a recent World Economic Forum (WEF) report, many perceive AI as a threat to our jobs, while others go so far as to assert that it poses a real threat to humanity itself.

What is clear for the time being is that many questions remain unanswered: Can we actually create conscious machines that have the ability to think and feel? What do we mean by the word conscious in the first place? What is an accurate definition of intelligence? And what are the implications of combining the Internet of Things (IoT) with intelligence?

The psychologist Jean Piaget remarked, “Intelligence is what you use when you don’t know what to do, when neither innateness nor learning has prepared you for the particular situation.”

In simpler terms, intelligence can be defined as doing the right thing at the right time, in a flexible way that helps you survive, act proactively and become more effective across the various facets of life.

There are various forms of intelligence: There is the more rational variety required for intellectually demanding tasks like playing chess, solving complex problems and making discerning choices about the future. There is also social intelligence, characterized by courteous social behavior. Then there is emotional intelligence, which is about being empathetic toward the emotions and thoughts of the people with whom you engage.

We generally experience these contours of intelligence in some combination, but that doesn’t mean they cannot be understood independently, perhaps even creatively, within the context of AI. Most human behaviors are essentially instincts or reactions to external stimuli; we generally don’t think of them as requiring intelligence at all.

In reality, though, our brains are smartly wired to perform these tasks. Most of what we do is reflexive and automatic, to the point that we often don’t even need to be conscious of these processes, yet our brain is always assimilating, analyzing and implementing instructions. It is very difficult to program robots to do the things we find easiest to do.

Focusing on solving the right problems with AI

The key predicament with AI is that, since its nascent stages, researchers have tended to begin with problems that are difficult for us humans to solve and that require a lot of logical thinking (playing chess, for example). This approach was premised on the assumption that problems we don’t need to think hard about must be easier to solve.

However, we are now starting to realize that while a chess-playing computer is intelligent, its intelligence is narrow. It is everyday intelligence that is actually difficult for AI: replicating the things we do well without even thinking about them, such as making a cup of tea in someone else’s kitchen. So after 60 years of hype surrounding AI, the things we originally thought were easy have turned out to be hard, and vice versa, a reversal often described as Moravec’s paradox. In retrospect, setting such high expectations for AI was not a good idea.

Using different AI and machine learning models, we are trying to synthesize, model and mimic natural intelligence. Understanding natural intelligence is the most systematic way of developing artificial intelligence.

Many newer models take a biologically inspired approach, attempting to mimic the very elementary behaviors of animals or insects. The rationale is that even the most basic, natural behaviors are mastered by most animals, because any animal that cannot manage them will starve, fall prey to predators or miss out on mating opportunities.

Even single-celled organisms manage to “do the right thing at the right time” in order to survive, which suggests they are equipped with machinery for making value judgments of a sort, and that machinery is key to understanding such behavioral processes. This becomes more evident when we consider that even the simplest creatures, lacking any conscious or explicit value-judgment system, have been granted by evolution the discerning ability to “know” that doing one set of things, A, makes survival more likely than doing another set of things, B.
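
To make the point concrete, here is a minimal sketch, in Python with an entirely made-up one-dimensional “world” and arbitrary numbers, of how little machinery such behavior can require: a hypothetical forager that compares how good things are now with a moment ago, keeps going while life improves and tumbles into a random new direction otherwise, loosely in the spirit of bacterial chemotaxis.

```python
import random

def nutrient_level(position: float) -> float:
    """Toy one-dimensional environment: nutrient concentration peaks at position 10."""
    return -abs(position - 10.0)

def forage(steps: int = 200) -> float:
    """A purely reactive 'organism': no world model, no explicit value system,
    just a comparison between the present moment and the one before it."""
    position, direction = 0.0, 1.0
    previous = nutrient_level(position)
    for _ in range(steps):
        position += 0.5 * direction
        current = nutrient_level(position)
        if current < previous:                      # things got worse: tumble
            direction = random.choice([-1.0, 1.0])  # pick a fresh direction
        previous = current
    return position

if __name__ == "__main__":
    print(f"Final position: {forage():.1f}")  # tends to settle near the peak at 10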

This highlights the fact that such models are too generic. They may apply to a worm or a bacterium, but we would do well to define intelligent behavior in a way that captures the essence of intelligence: something that empowers organisms to go beyond their immediate environmental constraints and opportunities.

The extent to which human intelligence manifests itself depends on the symphony between our bodies and brains. Human history is replete with instances where this progressive form of intelligence has led to great accomplishments, driven by collective goals of harmony and development.

At the same time, it is this very intelligence that has been misguided and subverted to cause a number of catastrophic events, like wars and other atrocities. It has been proven time and again that intelligence, when polluted with negative traits like greed, avarice and vindictiveness, can become a curse to mankind.

The superiority of the collective mind

The recently deceased Marvin Minsky, one of the founding fathers of AI, described the “society of mind”: our cognitive processes and cognitive architecture are not confined to one particular place, nor are they a single isolated process; what our minds are capable of emerges from the collective behavior of many simpler, subtly interacting agents.

While this is not a new concept in the domain of AI, it assumes great significance given that our brains are massively complex networks of roughly 86 billion neurons in constant communication with each other. It is incredible that we continuously seem to understand “what is going on.” Neurons fire and respond in a multitude of fast-paced reactions, setting up patterns within the brain that decide the course of action our mind and body then carry out in unison.
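
For a rough sense of what the artificial counterpart of such a network looks like, the sketch below wires up a tiny feedforward network in which each unit simply sums weighted inputs and passes the result through a nonlinearity; the layer sizes and random weights are arbitrary placeholders, not a model of any real brain circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs: np.ndarray, weights: np.ndarray, biases: np.ndarray) -> np.ndarray:
    """One layer of artificial 'neurons': weighted sum, then a nonlinearity."""
    return np.tanh(inputs @ weights + biases)

# An arbitrary two-layer network: 4 "sensory" inputs -> 8 hidden units -> 2 outputs.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

stimulus = rng.normal(size=4)      # stand-in for an incoming sensory signal
hidden = layer(stimulus, w1, b1)   # a pattern of activity across the hidden units
decision = layer(hidden, w2, b2)   # downstream units read that pattern off
print(decision)                    # two numbers standing in for a "course of action"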

From an AI standpoint, this links well with thought experiments like Searle’s Chinese room argument: put someone in a room with a rule book (an algorithm), pass in questions, and the rules can be followed mechanically, without any grasp of what is happening, yet an answer still comes out. The inference drawn was that this is essentially what our brain does continuously, something we can simulate in a computer as a universal Turing machine.
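
A toy version of that setup is easy to write down. The “room” below is nothing more than a hypothetical rule book, a lookup table mapping incoming strings of symbols to outgoing ones; whoever, or whatever, applies the rules produces fluent-looking replies without understanding a word of them. The questions and rules are invented purely for illustration.

```python
# A minimal "Chinese room": the rule book maps incoming symbol strings to
# outgoing ones. Applying it mechanically requires no understanding at all.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "几点了？": "现在三点。",       # "What time is it?" -> "It's three o'clock."
}

def the_room(question: str) -> str:
    """Follow the rules; fall back to a stock apology when no rule applies."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(the_room("你好吗？"))  # a fluent reply, produced with zero comprehension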

The dangers of combining IoT with collective thinking

This links surprisingly well with the emergence of the Internet of Things, which has brought promising new opportunities but has also paved the way for a number of additional threats. We are creating complex new networks of interconnected devices that can share vast amounts of information. In conjunction with appropriate AI architectures and optimized machine learning models, this may eventually lead to an imitation of the human brain on a much bigger scale.

I am not just speaking about traditional artificial neural networks. I am suggesting that, under the banner of IoT, we may start creating intricate networks of machines intelligent enough to begin understanding things as inexplicable as human irrationality and our dependence on machines.

The fear does not arise from building humanoid robots (androids), because 99.99 percent of real-world problems (the so-called useful problems we hope to solve by leveraging robots) do not require an android body. An intelligent driverless car, for example, does not need a humanoid robot to climb in and drive it.

However, that intelligent connected car can certainly collect a great deal of useful information about the people using it: their patterns, behaviors, preferences and coordination with other intelligent devices (seat belts, in-car gadgets and so on). Once these traits are absorbed into patterns that make such machines “smart,” there is no reason humans would not become enamored of, and eventually enslaved by, them.

A case in point is the ubiquitous, obsessive pattern of smartphone use, a phenomenon that has reached alarming proportions. IoT is likely to exacerbate this problem, and incorporating AI into IoT to create an Internet of “Smart” Things may lead to some seriously unpleasant outcomes.

Incorporating creativity has always been one of the biggest challenges in the advancement of AI, and the chess-playing scenario illustrates why: chess becomes a sublimely creative game when played by humans, yet the way a computer plays it can hardly be called creative, because the machine simply calculates a vast number of possibilities and runs through them in a programmed, calibrated manner using heuristics.
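
For contrast, the skeleton of how such a program “decides” fits in a few lines. The sketch below runs plain minimax on a deliberately tiny stand-in game (take one, two or three sticks from a pile; whoever takes the last stick wins) rather than on chess; a real engine layers enormous refinements on top, but this exhaustive, mechanical look-ahead is the “calculating a vast number of possibilities” described above.

```python
def minimax(pile: int, maximizing: bool) -> int:
    """Plain minimax on a toy take-away game: remove 1, 2 or 3 sticks per turn,
    and whoever takes the last stick wins. Returns +1 if the maximizing player
    can force a win from this position, -1 otherwise. A chess engine performs
    the same kind of exhaustive look-ahead, only over astronomically more
    positions and with a hand-tuned evaluation function at the search horizon."""
    if pile == 0:
        # The previous player took the last stick, so the side to move has lost.
        return -1 if maximizing else +1
    outcomes = [minimax(pile - take, not maximizing)
                for take in (1, 2, 3) if take <= pile]
    return max(outcomes) if maximizing else min(outcomes)

if __name__ == "__main__":
    # Twelve sticks is a losing position for the player to move (multiples of
    # four always are), and the search discovers this by brute calculation.
    print(minimax(12, True))  # -1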

When IBM’s Deep Blue beat Garry Kasparov in 1997, it did not mean that AI was about to take the world in its grip. It merely highlighted real issues surfacing in the human cognitive process: the intrinsic unpredictability of the human mind and the plethora of intrusive yet persistent thoughts that sometimes make us behave irrationally.

Will we design our own executioner?

Despite being fully cognizant of our own unpredictability, we are adept at taking strategic shortcuts at the right moments, which makes it abundantly clear that our ecological smartness and creativity have gotten us to where we are now. The problem is not so much the machine that plays chess as the human on the other side, and the oddly positioned pieces that push us to unfathomable limits in our overzealous ambition to win at any cost.

AI, like other forms of modern technology, can become incredibly beneficial to us. Whether or not that happens remains a matter of conjecture, given the inevitable human tendency to misuse just about anything that makes our lives easier.

As responsible developers of technology, we may want to ensure that our comfort-driven instincts do not take precedence over our larger commitment to inclusive economic growth, more compassionate societies and a better world at large.