At a symposium in Washington DC on Friday, DARPA <a href="https://www.darpa.mil/news-events/2018-09-07" target="_blank" rel="noopener">announced</a> plans to invest $2 billion in artificial intelligence research over the next five years.
Under a program called “AI Next,” the agency has more than 20 programs in the works and will focus on “enhancing the security and resiliency of machine learning and AI technologies, reducing power, data, performance inefficiencies and [exploring] ‘explainability’” of these systems.
“Machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly, but ultimately impossible,” said DARPA director Dr. Steven Walker. “We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”
Artificial intelligence is a broad term that can encompass everything from intuitive search features to true machine learning, but nearly all such systems rely heavily on consuming data to inform their algorithms and “learn.” DARPA has a long history of research and development in this space, but has recently seen its efforts surpassed by foreign powers like China, which earlier this summer announced plans to become an AI leader by 2030.
In many cases these AI systems are still in their infancy, but the technology — especially machine learning — has the potential to completely transform not only how users interact with their own devices but also how corporate and governmental institutions use it to interact with their employees and citizens.
One particular concern with machine learning is the bias that can be baked into these systems by the data they consume during training. If that data contains gaps or misinformation, the machines can reach incorrect conclusions — such as which individuals are “more likely” to commit crimes — with devastating consequences. Even more troubling, when a machine arrives at such conclusions organically, its “learning” is obscured inside what is called a black box.
In other words, even the researchers who design the algorithms can’t fully explain how the machines reach their conclusions.
That said, when handled with care and forethought, AI research can be a powerful source of innovation and advancement. As DARPA moves forward with its research, we will see how it handles these important technical and societal questions.