Google’s AI chief thinks reports of the AI apocalypse are greatly exaggerated

Google’s John Giannandrea had plenty to say about artificial intelligence at TechCrunch Disrupt SF. In particular, he thinks people are too scared of general purpose artificial intelligence.

A few years ago, Giannandrea compared artificial intelligence to a 4-year-old child. Today, he revised that comparison, saying current systems don’t even measure up to that. “They’re not nearly as general purpose as a 4-year-old child,” he said.

“I think there’s a huge amount of hype around AI right now. There’s a lot of people that are unreasonably concerned around the rise of general AI,” Giannandrea said. “Machine learning and artificial intelligence are extremely important and will revolutionize our industry. What we’re doing is building tools like the Google search engine and making you more productive.”

TechCrunch’s Frederic Lardinois, who moderated the session, went one step further and asked Giannandrea whether he was concerned about the AI apocalypse.

“I’m definitely not worried about the AI apocalypse,” he said. “I just object to the hype and soundbites that some people are making,” he added later in the interview. Sorry, Elon Musk.


Many people are also concerned that Google and other big tech companies are the only ones able to build competitive machine learning-powered products. Companies like Google sit on huge piles of data, have the capability to build custom processing units and can reach billions of consumers.

But Giannandrea said that Google needs to keep an open conversation with the artificial intelligence community. When it comes to data sets, Google is trying to level the playing field. “You don’t need quite as much data as you think you do. There are large data sets that are open,” Giannandrea said. “We publish data sets around videos, imagery. Other companies do the same thing.”

Even internally, Google has academic researchers working with engineers. “We have a very close relationship between researchers and product developers,” Giannandrea said.

The company also needs to share the architecture of its AI products because it wants to avoid bias as much as possible. “We have been spending a lot of time looking at machine learning fairness,” Giannandrea said. “If your data is biased, then you build biased systems. We have many efforts at Google and research collaborations around this question of fairness in machine learning and unbiased data.”

And finally, the term artificial intelligence itself might not be the right one. According to Giannandrea, artificial intelligence doesn’t mean much. “I almost try to shy away from this term artificial intelligence — it’s kind of like big data,” he said. “It’s such a broad term, it’s really not well defined. I’ve been trying to use the term machine intelligence.”
