Imagine yourself as a passenger in a futuristic self-driving car. Instead of requiring you to program its navigation system, the car interacts with you in a near-human way to understand your desired destination. It has learned your preferences for music, temperature and lighting, and adjusts them without your needing to twist a knob.
Two distinct paths of technological evolution are converging to create this future (and more): virtual intelligence (VI), which is planned, controlled and predictable, and artificial intelligence (AI), which in sharp contrast is none of these.
Real AI should think and reason like a living organism, evolving and adapting to its environment. Advancing AI technology depends both on recognizing the distinction between AI and VI and on understanding how AI will be integrated into the lives of its users.
Our daily interactions increasingly take place in virtual environments. We rely on enhanced digital interactions and shared information through the use of avatars, social platforms and interactive videos, games, meetings and training. These virtual worlds enable learning, business and expanded social relationships. However, these virtual environments are completely dependent on human input and management.
Humans set the parameters and establish the controls for each virtual environment. The intelligent software and computing technology that helps facilitate our online interactions and mimics real life is known as VI. Such technology is useful in solving real-world problems, but it is not “self-aware” and is limited in its capabilities and operation. Ultimately, VI stops short of learning or abstract thought.
For an intelligent creature to succeed as a self-aware, adaptive being, its structural foundations must rely heavily on biological and environmental signals. In this vein, my colleague John Carbone and I have developed robotic cockroaches with a distributed intelligence system similar to the distributed brain of an octopus. The three neurons in each distributed “leg” of the brain, together with a central mediator (an artificial prefrontal cortex), help the bugs live autonomously, adjust to changing conditions and remain self-aware.
Driven by animal-like instincts such as hunger, the bugs seek out light to charge their diminishing power sources, but the light simultaneously signals danger and harm. They are inherently nocturnal (averse to light) and are also programmed to recognize that too much time in the light leaves them vulnerable to predators, simulated by an infrared light on another robot. As a result, they must work out how to balance these competing instincts.
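The balancing act described above can be illustrated with a minimal sketch. This is not the authors' actual control system; the function name, drive weights, and the simple threshold arbitration are all hypothetical, chosen only to show how a "mediator" might weigh a hunger drive against light-aversion and predator risk.

```python
# Hypothetical sketch of competing-instinct arbitration: hunger pulls the bug
# toward light (to recharge), while nocturnal aversion and predator risk push
# it away. All names and weights are illustrative, not from the real robots.

def decide_action(battery, light_exposure):
    """Return 'seek_light' or 'hide' by weighing competing drives.

    battery        -- charge level, 0.0 (empty) to 1.0 (full)
    light_exposure -- recent fraction of time spent in light, 0.0 to 1.0
    """
    hunger = 1.0 - battery          # low battery -> strong pull toward light
    aversion = 0.4                  # constant nocturnal dislike of light
    predator_risk = light_exposure  # lingering in light attracts predators

    # The "mediator" compares the net drives and picks a behavior.
    if hunger > aversion + predator_risk:
        return "seek_light"
    return "hide"

print(decide_action(battery=0.1, light_exposure=0.2))  # hunger dominates: seek_light
print(decide_action(battery=0.9, light_exposure=0.5))  # safety dominates: hide
```

Even this toy version shows why different individuals can diverge: small differences in accumulated light exposure or charge level flip the decision, much as each real bug responds differently to every test.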
These artificially intelligent bugs learn from their surroundings and transform their behavior to survive. Like humans, each AI is unique and responds differently to every test.
While AI is still in the early stages of development, the true advent of AI in the next decade could change the way cars are designed, how pilots fly and how information is delivered. The U.S. military, in particular, has sought innovation in AI as part of its plan to find new ways of pairing humans and machines.
In modern warfare, for example, it takes multiple people to command and control one unmanned aerial vehicle (UAV). The UAV itself is an example of VI: a computing device capable of operating remotely according to human control and design. It is not, however, autonomous.
Imagine if only one person were required to operate a fleet of AI drones that fly and approach targets independently. If for some reason a controller lost contact during a mission or a drone were destroyed, the remaining drones could reorganize, fill the void and continue — and ultimately complete — the mission.
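The "fill the void" behavior sketched above can be made concrete with a small, hypothetical example. The data structure, function name, and the round-robin reassignment strategy are all illustrative assumptions, not a real drone-control protocol.

```python
# Hypothetical sketch of fleet reorganization: when a drone is lost, its
# remaining targets are redistributed among the survivors so the mission
# can still complete. Names and strategy are illustrative only.

def redistribute(assignments, lost_drone):
    """Reassign a lost drone's targets round-robin among surviving drones."""
    orphaned = assignments.pop(lost_drone, [])  # remove the lost drone
    survivors = sorted(assignments)             # stable order for reassignment
    for i, target in enumerate(orphaned):
        assignments[survivors[i % len(survivors)]].append(target)
    return assignments

# Three drones, each with assigned targets; drone "d2" is destroyed mid-mission.
fleet = {"d1": ["t1", "t2"], "d2": ["t3"], "d3": ["t4", "t5"]}
redistribute(fleet, "d2")
print(fleet)  # d2 is gone; its target t3 now belongs to a surviving drone
```

The key property is that no target is dropped: every target originally assigned to the lost drone ends up covered by a survivor, which is the essence of the self-reorganizing fleet described above.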
Beyond warfare, UAVs could scout disaster zones for survivors or track global weather patterns, all with greater efficiency and effectiveness. AI devices could serve as in-home helpers, meeting some needs of people requiring care. The potential applications of both VI and AI are vast.
Today’s research and development in the AI sector will lead to systems of greater complexity and capability. These systems will shift our computing interactions from VI to true artificial intelligence, with significantly greater synergy between humans and machines. This biologically grounded pairing of humans and machines has immense potential for advancement.
The AI platforms of our future will revolutionize disaster response, global logistics, warfare and other important areas of our lives. This future is not without concern for some, however, as the powerful nature of the technology is paired with its depiction in popular culture. AI is fertile ground for a scary movie plot, and public perception often leaves innovators facing a significant stigma against AI’s many benefits.
Contrary to Hollywood’s view, AIs will not develop destructive cognitive functions or irrational judgments. They will, however, be capable of developing rudimentary emotional responses, though they will lack the emotional complexity and attachment humans possess.
Think again of that futuristic car. Through artificial intelligence, a self-driving, self-navigating system can quickly take on a much more welcome persona, forming personal connections with its user and offering solutions matched to the user’s habits and personality traits.
There are many psychological factors influencing what is sometimes a negative public perception of AI technology. We must see beyond the narrative of science fiction and ensure we are working toward AI development that allows us to harness its immense potential. We should establish training and governance for the development of human-AI interaction as such interactions become more prevalent and integrated into everyday life.
The learning machines of tomorrow have much to offer, but we must earn that future by replacing any irrational fear with continued research and development, driven by our very human desire to improve and innovate.