Google’s adversarial AIs could lead to less reliance on real-world data

One of the biggest challenges facing the development of AI is the sheer amount of human input it requires, both the labor of identifying and labeling data up front and the scale of the data sets needed to make training AI systems possible in the first place. Google AI researcher Ian Goodfellow, who recently headed back to Google Brain after a stint at the Elon Musk-backed OpenAI, hopes to address both problems through an approach to AI that involves pitting one neural network against another.

The concept isn’t new: Facebook published a paper co-authored by its head of AI research Yann LeCun and AI engineer Soumith Chintala last June, in which they describe using generative adversarial networks (GANs) to eventually enable unsupervised learning, aka machine learning that takes place without human-labeled training data. Goodfellow pioneered the idea, however, proving its basic viability after a heated (and boozy) debate with some academic colleagues at the University of Montreal, as Wired reports.

In essence, the system consists of two opposing neural networks that inform one another through their opposition: the first tries to create something synthetic, for instance a realistic image of a dog, while the second critiques those attempts, trying to spot the fakes and flag where the first network has failed. Through repeated rounds of trial and criticism, the generating network gradually improves its output, sometimes in unexpected ways.
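That trial-and-criticism loop can be sketched in miniature. The toy below (everything here is illustrative and not from Goodfellow's work: the 1-D Gaussian data, the one-parameter "generator" that just shifts noise, the logistic "discriminator", and the finite-difference updates standing in for backpropagation) shows the two networks' opposing objectives and the alternating update steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): "real" data is drawn from N(4, 1); the
# generator is a single shift parameter g applied to unit noise, and the
# discriminator is a logistic classifier with weight w and bias b.
real = rng.normal(4.0, 1.0, size=512)

def d_score(x, w, b):
    # Discriminator's estimate of P(x is real)
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def losses(g, w, b, noise):
    fake = noise + g                       # generator output
    d_real = d_score(real, w, b)
    d_fake = d_score(fake, w, b)
    # Discriminator wants to score real data high and fakes low
    d_loss = -(np.log(d_real + 1e-8).mean() + np.log(1 - d_fake + 1e-8).mean())
    # Generator wants its fakes to be scored as real
    g_loss = -np.log(d_fake + 1e-8).mean()
    return d_loss, g_loss

g, w, b = 0.0, 1.0, 0.0
lr, eps = 0.1, 1e-4
for step in range(2000):
    noise = rng.normal(0.0, 1.0, size=512)
    # Critic's turn: improve the discriminator (finite-difference descent)
    dw = (losses(g, w + eps, b, noise)[0] - losses(g, w - eps, b, noise)[0]) / (2 * eps)
    db = (losses(g, w, b + eps, noise)[0] - losses(g, w, b - eps, noise)[0]) / (2 * eps)
    w, b = w - lr * dw, b - lr * db
    # Generator's turn: adjust g to better fool the current discriminator
    dg = (losses(g + eps, w, b, noise)[1] - losses(g - eps, w, b, noise)[1]) / (2 * eps)
    g -= lr * dg

# The generator's shift should drift toward 4, the mean of the real data,
# at which point the discriminator can no longer tell real from fake.
print(f"learned shift g = {g:.2f}")
```

The design point the sketch illustrates is that neither side is trained against a fixed target: each update changes the opponent's loss landscape, which is what lets the generator improve without any human grading its output.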

Using GANs, AI researchers could not only reduce the human involvement needed to correct and label training signals, letting systems like image generators get better over time – they could also minimize the amount of real data used to build useful AI and machine learning tools in sensitive areas, including health care. Google’s own DeepMind has a partnership with the NHS that involves controversial data sharing deals; GANs could provide a mechanism for producing entirely fabricated patient data sets that are just as useful for training AI as the real thing.

Goodfellow being back at Google could mean more competition (and collaboration) among the big tech firms in pursuit of GANs, which in turn could lead to significant improvements in the speed at which AI develops in the future. And if it also leads to greater privacy assurances for individuals who stand to potentially benefit from those developments, that could be a win for everyone involved.