Dueling AIs compete in learning to walk, secretly manipulating images and more at NIPS

A little competitive spirit is often just the thing when you want to spur some innovation, and that’s exactly the idea at the Neural Information Processing Systems (NIPS) conference, where AI systems will be competing in a variety of tasks. Teams are already setting their systems against one another: imitating how muscles work when we walk, answering pub quiz questions and subtly manipulating images.

It’s a new initiative at NIPS, and 23 proposed competitions were narrowed down to five. The conference isn’t until early December, but the contests are already in full swing, as it’d be difficult for competitors to just whip up a system from scratch while on location.

Each contest is run independently and some have special sponsors and cash prizes.

Learning to Run: Probably the most visually interesting contest, this one has to do with simulating how the brain controls our muscles and bones during walking motion. Simulated physiology and physics, plus complications like a slippery floor, steps and weakened muscles, add to the challenge. The idea is not just to build an AI that knows how to walk, but to offer insight into how surgery may affect gait in people with afflictions like cerebral palsy. You can read more about it at this Stanford news release, and the leaderboard GIFs are pretty funny. Amazon is offering $30,000 in AWS credits for the purse.

Adversarial Attacks and Defenses: We’ve all seen the neural networks that have been trained to identify certain types of pictures — faces, cats, landscapes and so on. Because these rely on what we might think of as a rather strange logic involving all kinds of low-level data, there are ways to fool them into thinking what they see is something totally different, while keeping the image more or less intact to human eyes. This competition is about creating and defending against such malicious image manipulation.
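To make the attack side of this concrete, here is a minimal sketch of the classic "fast gradient sign" idea, using a hypothetical linear classifier as a stand-in for a trained neural network (the weights, dimensions and epsilon below are illustrative assumptions, not anything from the competition):

```python
import numpy as np

# A minimal FGSM-style sketch on a hypothetical linear classifier:
# score = w . x, positive score means the model says "cat".
rng = np.random.default_rng(0)
d = 4096                    # number of "pixels"
w = rng.normal(size=d)      # stand-in weights (assumed, not trained)
x = w * (0.5 / (w @ w))     # an input the model scores at exactly +0.5

def predict(img):
    return "cat" if w @ img > 0 else "not cat"

# For a linear model, the gradient of the score with respect to the input
# is just w. Nudging every pixel by a tiny eps against sign(w) lowers the
# score by eps * sum(|w_i|): in high dimensions, thousands of tiny nudges
# add up to a large change in the score while each pixel barely moves.
eps = 0.001
x_adv = x - eps * np.sign(w)

print(predict(x))      # "cat"
print(predict(x_adv))  # flips to "not cat"
```

Real attacks on deep networks work the same way in spirit, just with the gradient computed by backpropagation instead of read off the weights; the defense side of the contest is about making models robust to exactly these bounded perturbations.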

Conversational Intelligence Challenge: Like previous conversational AI challenges, in this one the goal is to act as human as possible. The bots are connected with random human evaluators, and both are given the text of a recent news or Wikipedia article and can discuss it for as long as they like. Several rounds of evaluation will be run, with the final one taking place at NIPS. First place gets 10 grand! Facebook, ever on the lookout for advances in the chatbot space, is a “platinum sponsor,” while Battlefield alumnus Maluuba is a “Silver Partner.” Whatever those mean.

Human-Computer Question Answering: Here competitors are building a sort of miniature Watson, or at least the version of Watson that schooled everyone on Jeopardy. Systems will be given quiz-type questions (Who was the fourth emperor of Rome?) one word at a time, and whichever can answer soonest (i.e. after seeing the fewest words) gains points — or loses them, of course, if the answer is incorrect. Sounds like there will be a human-computer showdown at NIPS: “We reserve the right to combine systems for our exhibition match against a human team.”
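The scoring mechanic described above can be sketched in a few lines. Everything here is an illustrative assumption — the point values and the toy trigger-word "system" are made up, not the contest's actual rules:

```python
# Sketch of word-by-word quiz scoring: the question is revealed one word
# at a time, buzzing early with a correct answer earns more points, and a
# wrong buzz loses the same amount. All constants are assumptions.

def play(question_words, answer, system):
    seen = []
    for word in question_words:
        seen.append(word)
        guess = system(seen)
        if guess is not None:
            # Assumed scoring: the reward shrinks as more words are revealed.
            value = max(1, len(question_words) - len(seen) + 1)
            return value if guess == answer else -value
    return 0  # never buzzed

# Hypothetical system that buzzes as soon as it sees a trigger word.
def toy_system(seen_words):
    return "Claudius" if "emperor" in seen_words else None

q = "Who was the fourth emperor of Rome ?".split()
print(play(q, "Claudius", toy_system))  # → 4 (buzzed after 5 of 8 words)
```

A smarter system would trade off confidence against the shrinking reward, buzzing only when its expected score for guessing beats waiting for another word — which is exactly the tension that makes the contest interesting.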

Classifying Clinically Actionable Genetic Mutations: If you knew which genes in a cancerous tumor were causing it to grow and which were just trash, you could target those genes and possibly stop it from spreading. This is a difficult, time-consuming process, however, usually done by experts. But with access to those experts’ annotations of thousands of mutations, it’s hoped that a machine learning system will be able to perform the task too — or at least help narrow the search. The $10,000 grand prize is offered by Memorial Sloan Kettering Cancer Center. There are already 685 teams signed up!

Of course, we won’t know the outcomes of these contests until December, but you can follow along or join up and take part in the discussions if you want — they all appear to be free to enter.