The Militarization Rate Of Technology And Elon Musk’s AI Worries

We recently marked the 112th anniversary of the Wright Brothers’ first powered, manned flight. The flight lasted 12 seconds. That one-fifth of a minute has reverberated through technological advancement ever since.

It took just 95 years from that first flight to the launch of the first components of the International Space Station, an incredibly quick progression considering the millions of years humans remained flightless. Even quicker was the airplane’s adoption in combat: a mere 11 years passed between the Wright Brothers’ first success at Kitty Hawk and the British military’s use of aircraft as weapons during World War I.

This militarization of technology is an eternal process. Remember the “Dawn of Man” segment at the beginning of Stanley Kubrick’s 2001: A Space Odyssey? Bones were among the first tools early humans picked up, and they promptly used them to beat the s*** out of other beings. The scenario is all too plausible, considering the ever-entwined roles technology and violence have played throughout human history.

Tech in the military

It goes without saying that some of the world’s most advanced technology can be found in the hands of the United States military. Drones. Nuclear bombs. Heat rays. Radar that can see through buildings. These technologies often originate in the military, or in organizations commissioned by it; sometimes they trickle down into consumer products. The microwave oven, whose parents were World War II radar systems, is the classic example. Decades before the World Wide Web reached total ubiquity, military networks like ARPANET and MILNET needed a unifying system to simplify communications between them. Thus began research on an “Internet.”

Technology and the military grew up together, and it seems they always will be. This is what puts innovators like the Wright Brothers in strange, ethically ambiguous positions. What if your creation is used for destruction? Should you share some of the blame? Were there precautions you could have taken to prevent its misuse?

Enter Elon Musk, the man who will attempt to answer these questions.

Forever environmentally minded, he has played a key part in developing Tesla’s incredibly efficient electric cars, the conceptual near-supersonic Hyperloop transit system, SpaceX’s reusable rockets that can land vertically and SolarCity’s clean energy grid. To expedite the development of electric vehicles, he even opened Tesla’s patents for use by other automakers. He has also been an advocate for manufacturers’ rights to sell directly to customers, bypassing dealerships; since electric cars need relatively little maintenance, dealerships, which make good money from repairs, are less likely to sell them.

It’s hard to doubt Musk’s integrity, making the announcement of his newest project, OpenAI, a strange one.

Open-sourcing artificial intelligence

Artificial intelligence (AI) has long been a fascination and a worry of writers and artists like Isaac Asimov, Arthur C. Clarke and Stanley Kubrick. More recently, physicist Stephen Hawking, Apple co-founder Steve Wozniak and Elon Musk have joined thousands of others in signing an open letter pledging to research AI only for “good.”

Musk, seemingly unconvinced that the letter would be enough, has since made a further attempt at steering AI’s fate in the right direction. A partnership of Musk and Y Combinator’s Sam Altman, OpenAI is an organization that will research and develop artificial intelligence. Just like Google and Facebook, right? Sort of.

Much like Tesla’s vehicle designs, OpenAI’s technology will be available for everybody to use. For free. Currently, Google and Facebook have similar open-sourcing operations with their respective AI projects, but Elon Musk has his doubts about the permanence of those arrangements. “As time rolls on and we get closer to something that surpasses human intelligence,” he explains to Backchannel, “there is some question how much Google will share.”

Might open-sourcing this technology aid in its exploitation? Musk and Altman think it’s likely, but with the technology available to everyone, they hope the good uses will outweigh and discourage the bad. AI won’t be only in the hands of, say, Big Brother.

The unstoppable momentum of technology

There is a certain inevitability that accompanies any technology being worked on by separate groups simultaneously. If the Wright Brothers hadn’t been the first to fly, someone else would have been. Despite the warnings Elon Musk and others have delivered to the world about artificial intelligence, it is being developed by countless researchers, organizations and companies. The technology will continue to progress without hinging on any individual. And it will be used in ways that many would consider unethical.

Musk demonstrates his understanding of this in almost every project in which he’s involved: develop something new, do it well, then let others build on the design. Through this process he, in a way, washes his hands of the negative side effects of his developments. OpenAI hands artificial intelligence to people who could develop it for the common good when it would otherwise belong exclusively to those who would use it solely to their own advantage. The technology is coming, whether we like it or not. Open-sourcing it gives it an early chance at being used for good.

Musk and Altman have attracted researchers from NYU, UC Berkeley, Stanford, Carnegie Mellon and University of Amsterdam, as well as Microsoft and Google alumni, to work on OpenAI. Part of what drew these people to the project, Altman tells Wired, is the universal availability of their developments. “The people we hired love the fact that it’s open and they can share their work,” he says.

Most of these employees have experience working on artificial intelligence with other organizations, giving them a bit of a head start on the project. OpenAI should hardly be seen as a newcomer to the AI game.

Why OpenAI can work

The team, of course, will be a major determining factor in OpenAI’s fate, but much more will play into its potential success. While private corporations have business secrets to protect, the open-source model lets OpenAI’s researchers collaborate with people outside the organization, because there is nothing to hide. OpenAI will also welcome development from those unaffiliated with it, giving the team a rare opportunity to learn from uses of its technology that no one inside had thought of.

The freedom to run, study, redistribute and improve software without payment or permission is the very essence of open source. The Linux operating system is perhaps the best-known open-source project, as well as the best example of the model’s potential for success. Linux powers 98.8 percent of the world’s supercomputers, 36.72 percent of web servers and 53.96 percent of mobile devices (most commonly through the Android operating system).

Businesses interested in SaaS, or software as a service, have much to gain from open-source AI, especially in the fields of customer service and deep learning. Software that can help customers, as well as track and predict trends, will likely be the standard in the coming years. Engineers and programmers at these companies can help develop specialized uses of artificial intelligence that the OpenAI team might skip over. More people working toward using AI in helpful, productive ways is exactly what Musk and Altman desire.

Remaining uncertainty

Sadly, Orville Wright lived to see his creation used in massive air battles and bombings, including those of Hiroshima and Nagasaki. Understandably, he had mixed feelings about it:

“I feel about the airplane much the same as I do in regard to fire. That is I regret all the terrible damage caused by fire, but I think it is good for the human race that someone discovered how to start fires and that we have learned how to put fire to thousands of important uses.”

Whether it’s fire, electricity, flight, robotics or artificial intelligence, there are benefits and dangers that come with all human developments. Accepting this inevitability and attempting to catalyze the beneficial elements of an advancement is a unique, but risky tactic.

Elon Musk is dropping a match. Exactly what catches fire, and whether the benefits outweigh the damage, is a mystery we will watch unfold in the coming years.