The top artificial intelligence stories of 2016


Artificial intelligence is a phrase on everyone’s lips, but when you think about it, it’s rather hard to define, isn’t it? People have been trying for a long time, and the answer has as much to do with how we define thinking as with what a computer can actually do. This roundup doesn’t settle any of the myriad problems that computer scientists, psychologists and philosophers have been beating their heads against for decades, if not centuries; it’s a run-through of the state of AI in 2016.

1/15

Cognitive science pioneer Marvin Minsky passes away

The year started off on a down note: Marvin Minsky, one of the most important minds in artificial intelligence and computing of the past century, passed away on January 24 at the age of 88. Minsky was a pioneer in both computer science and cognitive science; his theories on the computational nature of the brain applied equally to both. “The problem of intelligence seemed hopelessly profound,” he told The New Yorker in 1981. “I can’t remember considering anything else worth doing.”

2/15

Google DeepMind's AlphaGo system beat world champion Lee Sedol in Go

Games like checkers have been child’s play for computers to win for decades, and even in the vastly more complex chess they have proven to be superior opponents. But Go, with its huge board, subtle strategies and sheer number of moves to choose from, remained the province of human masters. Until March, that is, when Google DeepMind’s AlphaGo system beat world champion Lee Sedol in four out of five games. The victory wasn’t in brute force of calculation — Go is too complex for that — but in making a system that could pare down the problem space in a human way.
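
For a sense of just how intractable brute force is here, a back-of-the-envelope comparison of the two game trees helps. The figures below are commonly cited rough averages (about 35 legal moves per turn over about 80 turns for chess, about 250 over 150 for Go), not exact numbers.

```python
# Rough, illustrative averages only: ~35 moves/turn over ~80 turns for chess,
# ~250 over ~150 for Go. Exact figures vary by source.
from math import log10

def tree_size_exponent(branching_factor, moves):
    """log10 of the naive game-tree size, branching_factor ** moves."""
    return moves * log10(branching_factor)

print(f"chess ~ 10^{tree_size_exponent(35, 80):.0f} positions")   # about 10^124
print(f"go    ~ 10^{tree_size_exponent(250, 150):.0f} positions") # about 10^360
```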

3/15

Nvidia announces a supercomputer aimed at deep learning and AI

AI ain’t simple, theoretically or computationally, and it takes a whole lot of processing to train and run these models. Nvidia and others have stepped up to the task of providing hardware well-suited for these purposes — in the case of the DGX-1, that’s eight heavy-duty GPUs packed into a single box, with 7 terabytes of storage for those giant data sets. It would probably run Crysis pretty well, too, but at $129,000 that’s not the best use for it.

4/15

Tech imitates art in a 3D-printed fake Rembrandt based on the old master’s style

Computers may only do what we tell them to, but within that limitation they can still be remarkably creative. This fascinating project had a machine learning system take in the entirety of Rembrandt van Rijn’s paintings, learning the colors, composition, demographics, dress, feature geometry and so on. The system then created a new Rembrandt-like painting from scratch, informed by that data. It’s quite convincing, if slavishly imitative — though, of course, that was the intention. The Next Rembrandt project is indicative of how AI and machine learning are really starting to break out of the lab and into real-world pursuits.

5/15

XPRIZE launches AI 2020 competition with IBM Watson

Interest in artificial intelligence was so great this year that XPRIZE, the same nonprofit that helped get SpaceShipOne off the ground in 2004, decided to create a four-year prize for it. More than 1,000 people have registered to apply new technologies to health, climate, transportation and even education. Though IBM is sponsoring the event’s $3 million purse, it stands to gain a lot by positioning itself, and the Watson brand, at the forefront of AI.

6/15

Google’s WaveNet uses neural nets to generate eerily convincing speech and music

Another Google project tackled the question of how to generate realistic speech at about as low a level as you can get: by recreating it sample by sample, 16,000 times per second. This fascinating research produces excellent results and can be trained to produce different accents or languages quite easily. It even produces lovely gobbledygook that must be what those languages sound like to someone who doesn’t speak them. Lastly, if you train it with piano instead of words, it turns out little numbers that, while they aren’t quite ready for the concert hall, are certainly better than anything I’ve ever composed.
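
To make the sample-by-sample idea concrete, here is a toy sketch of the general approach: a stack of dilated, causal 1-D convolutions predicts a distribution over the next 8-bit audio sample, which is drawn and fed back in, one sample at a time. PyTorch is assumed here purely for illustration; this is not DeepMind’s actual architecture, which is far deeper and adds gated activations, skip connections and conditioning on text or speaker.

```python
# A minimal, untrained sketch of WaveNet-style autoregressive audio generation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyWaveNet(nn.Module):
    def __init__(self, channels=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.embed = nn.Embedding(256, channels)      # quantized samples -> vectors
        self.convs = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size=2, dilation=d) for d in dilations]
        )
        self.dilations = dilations
        self.out = nn.Conv1d(channels, 256, kernel_size=1)  # logits over the next sample

    def forward(self, x):                             # x: (batch, time) ints in [0, 255]
        h = self.embed(x).transpose(1, 2)             # (batch, channels, time)
        for conv, d in zip(self.convs, self.dilations):
            h = torch.relu(conv(F.pad(h, (d, 0))))    # left-pad only, so it stays causal
        return self.out(h)                            # (batch, 256, time)

model = TinyWaveNet()
samples = torch.zeros(1, 1, dtype=torch.long)         # start from "silence"
for _ in range(160):                                  # 160 samples = 10 ms at 16 kHz
    logits = model(samples)[:, :, -1]                 # distribution over the next sample
    nxt = torch.distributions.Categorical(logits=logits).sample()
    samples = torch.cat([samples, nxt.unsqueeze(1)], dim=1)  # feed it back in
# Untrained, this produces noise; the point is the autoregressive sampling loop itself.
```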

7/15

Google open-sources image captioning model in TensorFlow

A lot of these same tech giants have been releasing open source material to publicly flex their muscles and get outside developers actively involved and emotionally invested in internal projects. One of the best examples of this was Google’s open-source release of its image captioning model in TensorFlow. Google reports 93.9 percent accuracy for the image classifier that underpins the captioning system.

8/15

Facebook, Amazon, Google, IBM and Microsoft come together to create the Partnership on AI

Despite rivalries, Facebook, Amazon, Alphabet, IBM and Microsoft came together this year to form a new Partnership on AI. The group is set to meet regularly to discuss advancements in artificial intelligence. Rather than play into public fears about AI destroying the world, the group is taking a practical approach to addressing issues of bias in machine learning frameworks and ensuring resources exist for people who want to start their careers in the space.

9/15

Google researchers aim to prevent AIs from discriminating

A risk when crunching huge amounts of data is that it can be hard to tell if that data has any built-in bias toward a group or category — imagine, for instance, data collected exclusively in cities, excluding rural populations where things may be very different. We have a tendency to trust the results of computers as factual, but machine learning algorithms aren’t calculating, they’re interpreting. Google’s “Equality of Opportunity” approach tackles this by adjusting a classifier’s decision thresholds so that people who actually qualify for a positive outcome (a loan, say) have the same chance of getting one no matter which group they belong to. As machine learning creeps more and more into everyday life, this kind of inclusive philosophy is becoming more and more essential.
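
A minimal sketch of that criterion, under toy assumptions: for a score-based classifier, pick a per-group threshold so that qualified people are approved at the same rate in every group. The data, group labels and target rate below are invented for illustration; this shows the idea from the research, not Google’s implementation.

```python
# Toy "equality of opportunity" check: equalize the true positive rate across groups.
import numpy as np

rng = np.random.default_rng(0)
groups = rng.choice(["a", "b"], size=5000)
qualified = rng.random(5000) < 0.5                       # ground-truth labels
# A biased score: group "b" gets systematically lower scores for the same outcome.
score = qualified * 0.6 + rng.random(5000) * 0.4 - (groups == "b") * 0.15

def threshold_for_tpr(scores, y, target_tpr):
    """Smallest threshold whose true positive rate is at least target_tpr."""
    candidates = np.sort(scores[y])[::-1]                # scores of qualified people, high to low
    k = int(np.ceil(target_tpr * y.sum())) - 1
    return candidates[k]

target = 0.8                                             # desired approval rate for qualified people
for g in ("a", "b"):
    mask = groups == g
    thr = threshold_for_tpr(score[mask], qualified[mask], target)
    tpr = (score[mask][qualified[mask]] >= thr).mean()
    print(f"group {g}: threshold {thr:.3f}, true positive rate {tpr:.2f}")
```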

10/15

Facebook lays out its AI and machine learning strategy

Competition for top talent remained fierce as large tech companies struggled to get their hands on as many AI researchers as possible. Aside from seven-figure salaries, companies have been putting resources into moonshot projects to get scientists excited about starting anew in the private sector. Facebook’s ambitions for machine learning involve satellites and autonomous drones blanketing the world with connectivity. The company is also betting big on machine learning to stabilize video and recognize speech in immersive virtual reality environments.

11/15

Student’s iDentifi app puts object recognition in the hands of the visually impaired

As AI leaves the lab it ends up doing work for us in banal ways — picking the next track — and important but easily overlooked ways, like this one. Object recognition is old hat by now, with companies like Facebook and Google competing over fractions of a percent in accuracy. But one virtue of being old hat is that people outside tech companies’ labs can start using it for other things — like high school student Anmol Tukrel’s app that identifies everyday objects for the visually impaired. That a student with limited resources can put such a thing together in his spare time — and chooses to — is reassuring.
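
The “old hat” point is easy to demonstrate: off-the-shelf models can now label everyday objects in a handful of lines. The sketch below uses a pretrained torchvision classifier purely as an illustration; it is not the stack iDentifi actually runs on, and “photo.jpg” is a placeholder path.

```python
# Label an everyday object with an off-the-shelf pretrained classifier.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)  # placeholder image
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]
top = int(probs.argmax())
print(f"Looks like: {weights.meta['categories'][top]} ({probs[top].item():.0%})")
```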

12/15

Airspace Systems’ ‘Interceptor’ can catch high-speed drones all by itself

We also saw some truly awesome applications of computer vision. While autonomous cars stole most of the headlines in this space, others pushed the technology into unique hardware and IoT devices. San Leandro-based Airspace Systems built an autonomous drone that can identify other threatening drones in the sky and capture them in Kevlar netting, without a human on the other end of the trigger. Airspace’s “Interceptor” drone then carries the captured craft back to a designated safe location. The team took a novel approach to training its machine learning framework, using a 3D flight simulator to give the system experience before moving on to real-world imagery and field testing.

13/15

Google’s AI translation tool seems to have invented its own secret internal language

Perhaps the deepest and strangest development this year was stumbled on by Google researchers (yes, again) working on efficient multi-language translation AI. They found that a system trained to translate between Korean and English and between Japanese and English could also translate Korean to Japanese, a pairing it had never been taught, without using English as an intermediary. Unknown to the researchers, it had formed what amounts to an “interlingua”: a deeper, internal representation of the concepts shared by the words in various languages. It’s more of a philosophical advance than a practical one, but it’s amazing all the same.
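
The setup behind this is simple to sketch: one shared model is trained on sentence pairs from many languages, with a token prepended to each source sentence saying which language to produce, and that same token can then request a direction the model never saw in training. The token format and example sentences below are illustrative, and no actual translation model is attached; this just shows how the training data is framed.

```python
# Sketch of the multilingual training trick described above (illustrative only).
def make_example(source_sentence, target_language):
    """Prepend a token telling the single shared model which language to emit."""
    return f"<2{target_language}> {source_sentence}"

# Training data mixes many language pairs through one model:
training_pairs = [
    (make_example("저는 학생입니다", "en"), "I am a student"),   # Korean -> English
    (make_example("私は学生です", "en"), "I am a student"),       # Japanese -> English
]

# Zero-shot request: a direction never seen during training (Korean -> Japanese).
# The shared internal representation (the "interlingua") is what makes it work.
print(make_example("저는 학생입니다", "ja"))   # "<2ja> 저는 학생입니다"
```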

14/15

OpenAI’s Universe is the fun parent every artificial intelligence deserves

In addition to models, tech companies and nonprofits alike have been giving back to the ecosystem by releasing training materials so that anyone can test their algorithms on their own. OpenAI, the nonprofit aiming to democratize AI, dropped a new tool it calls “Universe” to expedite progress toward artificial general intelligence. Specifically, developers can train and test their creations on video games, applications and websites. In the same vein, other companies like Udacity released libraries of driving footage for engineers who want to experiment with self-driving cars without the resources of companies like Tesla and Uber.
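
Getting started is deliberately lightweight. The snippet below follows the flavor of the quick-start example OpenAI published with the release (it assumes the universe and gym Python packages from late 2016, plus Docker for the remote environment container); the “agent” does nothing smarter than holding the up-arrow key in a bundled Flash racing game.

```python
# Quick-start sketch in the spirit of Universe's own example (universe + gym + Docker assumed).
import gym
import universe  # importing this registers the Universe environments with gym

env = gym.make('flashgames.DuskDrive-v0')   # one of the bundled Flash games
env.configure(remotes=1)                    # spins up one local Docker-backed remote
observation_n = env.reset()

while True:
    # One action list per remote: press (and hold) the up-arrow key.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```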

15/15

WTF is AI?

Advancements in GPU performance and cloud computing have made much of this year’s progress possible. But, at the core of everything, machine learning is pulling the strings. Machine learning is not new, but its prominence is at an all-time high. Never before in history has there been so much public interest in the field — Stanford is graduating more than 400 graduate students this year with the specialty. If you’re curious what all the buzz is about, a great place to start is this overview piece.