Google announced today that it’s open-sourcing the machine learning technology that powers a number of its products, including Google Photos search, speech recognition in the Google app, and the newly launched “Smart Reply” feature for its email app Inbox. Called TensorFlow, the technology helps make apps smarter, and Google says it’s far more powerful than its first-generation system – allowing the company to build and train neural nets up to five times faster than before.
For Google, that means it’s able to improve its products more quickly, the company explains.
TensorFlow was originally a project developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purpose of conducting machine learning and deep neural networks research. But the technology is applicable to a number of other domains, as well, says Google.
In more technical terms, the deep learning framework is both a production-grade C++ backend that can run on CPUs, Nvidia GPUs, Android, iOS and OS X, and a Python front end that interfaces with NumPy, IPython notebooks, and other Python-based tooling, writes Vincent Vanhoucke, Tech Lead and Manager for the Brain Team, on his Google+ profile.
Any computation that you can express as a computational flow graph, you can compute with TensorFlow. Any gradient-based machine learning algorithm will benefit from TensorFlow’s auto-differentiation and suite of first-rate optimizers, says Google.
“TensorFlow is what we use every day in the Google Brain team, and while it’s still very early days and there are a ton of rough edges to be ironed out, I’m excited about the opportunity to build a community of researchers, developers and infrastructure providers around it,” Vanhoucke says.
The goal of machine learning is to build technology that works similarly to the human brain, but it is not there yet, by any means.
In the blog post announcing the news, CEO Sundar Pichai explains that by open-sourcing the technology, Google hopes to accelerate machine learning research in a way that benefits the entire community, and makes the technology work better. As Pichai points out, even the best systems today struggle to do what a 4-year-old child can do – like know the name of a dinosaur after only seeing a couple of examples, or understand that the sentence “I saw the Grand Canyon flying to Chicago” doesn’t mean there’s actually a flying canyon in the air.
In addition, the company believes TensorFlow could be useful in research that makes sense of complex data – from protein folding to crunching astronomy data, for example.
TensorFlow is interesting for the way it enables researchers and developers to collaborate on machine learning tech. Instead of separate tools for each group, TensorFlow lets researchers test new ideas, and when they work, move them into products without having to rewrite code. This can speed up product improvements, and of course, by giving the larger machine learning community the ability to do the same, Google will also benefit from the accelerated pace of innovation that comes out of the open-sourced tech. That can ultimately boost Google’s bottom line as the tech is integrated into more of its products and improved.
TensorFlow can identify what’s in photos and videos, understand speech, read and understand written text (to some extent) and more.
This latter feature is what powers “Smart Reply,” a way for Google’s email app Inbox to create automatic responses to your emails for you – an easy-to-understand example of the potential for machine learning to enhance the products we use daily, like email. Smart Reply reads the content of the email, then suggests short phrases at the bottom of the screen which you can use to reply. And it learns the more you use it, understanding whom you say “yes” and “no” to, for instance.
Google says TensorFlow is used today in a number of its most visible products, including image search in Google Photos, speech recognition systems, Gmail, Google Search, and more.
Google says it used its earlier system, DistBelief, developed in 2011, to demonstrate that concepts like “cat” can be learned from unlabeled YouTube images, to improve speech recognition in the Google app by 25%, and to build image search in Google Photos. DistBelief also trained the Inception model that won the ImageNet Large Scale Visual Recognition Challenge in 2014, and was used in experiments in automated image captioning and in DeepDream.
But the company says DistBelief had limitations – “it was narrowly targeted to neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure — making it nearly impossible to share research code externally,” says the company in a separate announcement on its Research blog, written by Jeff Dean, Senior Google Fellow, and Rajat Monga, Technical Lead.
TensorFlow was designed to address those shortcomings.
“TensorFlow is general, flexible, portable, easy-to-use, and completely open source. We added all this while improving upon DistBelief’s speed, scalability, and production readiness — in fact, on some benchmarks, TensorFlow is twice as fast as DistBelief,” the announcement states.