How I Learned To Stop Worrying And Love AI

Editor’s note: Dr. Nathan Wilson is co-founder and CTO of Nara Logics, a big data intelligence company creating a brain-based AI platform.

There’s a pragmatic approach to the great artificial intelligence debate, one that responsibly addresses both the trepidations and the aspirations of the field’s top scientists and technologists.

I agree that anyone concerned with technology should be jolted awake by the warning that Stephen Hawking, Elon Musk, Bill Gates and many other scientists have now delivered. The early days of “let’s see what happens” with AI should justifiably be over. There is simply too much at stake in the coming decades, with new technology already fundamentally challenging our privacy and autonomy as individuals and our attention and consciousness as human beings.

However, the new path proposed by Musk and colleagues is equally risky. By drawing neon-sign outlines around what we fear most, we risk manifesting those exact fears into reality. By demonizing a crucial discipline, we make it a less likely destination for the moral contributors and pursuits it most needs.

A “third way” exists to navigate the exploration of AI. When the Internet started, despite its military DNA, key contributors infused and amplified a “spirit” of openness, wonder and exploration that remains responsible for its positive outcomes around the globe today. It is possible that this very spirit, rather than a culture of fear, will be crucial to incubating a nascent AI that resonates with our ideals.

Just as Carl Sagan urged us to approach extraterrestrial intelligence with an instinct of trust rather than “guns drawn” suspicion, there is wisdom in applying the same logic to artificial intelligence: in such encounters, to quote William James, the belief will create the fact, and our creations take on the character of our temper.

Putting this spirit into practice has three dimensions that are critical for defining this third approach and separating it from both rampant optimism and disabling fear.

First, a mandate to focus on a closer time horizon. Pundits are often drawn to the most intellectually stimulating ideas, the ones decades or even centuries away. But the fundamental issues in the technosphere that will shape our “AI nature” are being decided right now. Battles over how we approach privacy, how we avoid digital overload and cyborgization, and how we interact in a machine-assisted world are being won and lost today.

Those outcomes will reverberate through user interfaces and the direction of compute power for decades to come, when AI really gets going. Simply put, as with children, we must hurl ourselves into shaping AI in its early years, not inject it with post-hoc rules once it gets to college, which is essentially the approach of the open letter.

Second, for nascent AI to grow up well-adjusted, with a moral and logical compass, it must be exposed to many voices and not be raised by cold logic alone. It’s encouraging to see social scientists, entrepreneurs and even philosophers like Nick Bostrom contributing foundationally to this discussion.

This is another reason why we need more women and minorities in tech: they add diverse and stabilizing perspectives. We need contributions from neuroscientists and psychologists who study cognition. Finally, we need to shift the emphasis of our tech development from brunch apps in Silicon Valley to the problems of the world at large; that shift will determine the goals and evolution of our AI decades from now more than any theoretical seat belts and crash helmets we might try to put in place later.

Finally, to nurture the moral, humanity-supporting applications of AI, we should not be afraid to go deep with basic research, because it will teach us more about who we are and thus help erode the real challenge of AI: not “people vs. nature” but “people vs. themselves.” Toward this end, a growing number of academic researchers and a new breed of companies and technologies, including Google’s DeepMind, Vicarious, IBM Watson and our team at Nara Logics, are developing a primitive new class of “brain AI.”

This sort of boundary-pushing is a natural area of concern, but such explorations should not be shunned: this brain-like AI is application-neutral, and it brings us closer to understanding our own minds, morals and decision-making processes, thus helping to define human-supportive applications. Only with this deeper exploration and self-knowledge of our own mechanics can we arbitrate and reconcile the concerns that will increasingly arise.

As Bill Gates said, “I… don’t understand why some people are not concerned.” Accepting that wisdom, we propose a “selective optimism” that excites, rather than a blanket concern that inhibits, as a more dexterous way to sculpt the outcome. In that spirit, if AI can be made an ally in our own self-discovery and development, our society will experience the same wonder and positivity it felt at the dawn of the space age, as well as the remarkable progress and openness brought by the Internet age.