This famous roboticist doesn’t think Elon Musk understands AI

Earlier this week, at the campus of MIT, TechCrunch had the chance to sit down with famed roboticist Rodney Brooks, the founding director of MIT’s Computer Science and Artificial Intelligence Lab, and the cofounder of both iRobot and Rethink Robotics.

Brooks had a lot to say about AI, including his overarching concern that many people — including renowned AI alarmist Elon Musk — get it very wrong, in his view.

Brooks also warned that despite investors' current fascination with robotics, many VCs may underestimate how long these companies will take to build, a potential problem for founders down the road.

Our chat, edited for length, follows.

TC: You started iRobot when there was no venture funding, back in 1990. You started Rethink in 2008, when there was funding but not a lot of interest in robotics. Now, there are both, which seemingly makes it a better time to start a robotics company. Is it?

RB: A lot of Silicon Valley and Boston VCs sort of fall over themselves about how they’re funding robotics [now], so you [as a founder] can get heard.

Despite [investors who say there is plenty of later-stage funding for robotics], I think it’s hard for VCs to understand how long these far-out robotics systems will really take to get to where they can get a return on their investment, and I think that’ll be crunch time for some founders.

TC: There’s also more competition and more patents that have been awarded, and a handful of companies have most of the world’s data. Does that make them insurmountable?

RB: Someone starting a robotics company today should be thinking that maybe at some point, in order to grow, they’re going to have to get bought by a large company that has the deep pockets to push it further. The ecosystem would still use the VC funding to prune out the good ideas from the bad ideas, but going all the way to an IPO may be hard.

Second thing: On this data, yes, machine learning is fantastic, it can do a lot, but there are a lot of things that need to be solved that are not just purely software; some of the big innovations [right now] have been new sorts of electric motors and control systems and gearboxes.

TC: You’re writing a book on AI, so I have to ask you: Elon Musk expressed again this past weekend that AI is an existential threat. Agree? Disagree?

RB: There are quite a few people out there who’ve said that AI is an existential threat: Stephen Hawking, Astronomer Royal Martin Rees, who has written a book about it. They share a common thread: they don’t work in AI themselves. For those of us who do work in AI, we know how hard it is to get anything to actually work at product level.

Here’s the reason that people – including Elon – make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn’t.] When people saw DeepMind’s AlphaGo beat the Korean champion and then beat the Chinese Go champion, they thought, ‘Oh my god, this machine is so smart, it can do just about anything!’ But I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong].

TC: But Musk’s point isn’t that it’s smart but that it’s going to be smart, and we need to regulate it now.

RB: So you’re going to regulate now. If you’re going to have a regulation now, either it applies to something and changes something in the world, or it doesn’t apply to anything. If it doesn’t apply to anything, what the hell do you have the regulation for? Tell me, what behavior do you want to change, Elon? By the way, let’s talk about regulation on self-driving Teslas, because that’s a real issue.

TC: You’ve raised interesting points about this in your writings, noting that the biggest worry about autonomous cars – whether they’ll have to choose between driving into a gaggle of baby strollers versus a group of elderly women – is absurd, considering how rarely that particular scenario actually arises today.

RB: There are some ethical questions that I think will slow down the adoption of autonomous cars. I live just a few blocks [from MIT]. And three times in the last three weeks, I have followed every sign and found myself at a point where I can either stop and wait for six hours, or drive the wrong way down a one-way street. Should autonomous cars be able to decide to drive the wrong way down a one-way street if they’re stuck? What if a 14-year-old riding in an Uber tries to override it, telling it to go down that one-way street? Should a 14-year-old be allowed to ‘drive’ the car by voice? There will be a whole set of regulations that we’re going to have to have, that people haven’t even begun to think about, to address very practical issues.

TC: You obviously think robots are very complementary to humans, though there will be job displacement.

RB: Yes, there’s no doubt, and it will be difficult for the people who are being displaced. I think the role in factories, for instance, will shift from people doing manual work to people supervising. We have a tradition in manufacturing equipment of horrible user interfaces that are hard to use and require courses to learn, whereas in consumer electronics [as with smartphones], we have made the machines we use teach the people how to use them. And I do think we need to change our attitude in industrial equipment and other sorts of equipment, to make the machines teach the people how to use them.

TC: But do we run the risk of not taking this displacement seriously enough? Isn’t the reason we have our current administration because we aren’t thinking enough about the people who will be impacted, particularly in the middle of the country?

RB: There’s a sign that maybe I should have seen and didn’t. When I started Rethink Robotics, it was called Heartland Robotics. I’d just come off six years of being an adviser to the CEO of John Deere; I’d visited every John Deere factory. I could see the aging population. I could see they couldn’t get workers to replace the aging population. So I started Heartland Robotics to build robotics to help the heartland.

It’s no longer called Heartland Robotics because I started to get comments like, “Why didn’t you just come out and call it Bible Belt Robotics?” The people in the Midwest thought we were making fun of them. In retrospect, I should have thought that through a little more deeply.

TC: If you hadn’t started Rethink, what else would you want to be focused on right now?

RB: I’m a robotics guy, so every problem I think I can solve has a robotics solution. But what are the sorts of things that are important to humankind that neither large companies nor VCs, under the current investment model, are going to solve? For instance: plastics in the ocean. It’s getting worse; it’s contaminating our food chain. But it’s the problem of the commons. Who is going to fund a startup company to get rid of plastics in the ocean? Who’s going to fund that, because who’s going to [provide a return for those investors] down the line?

So I’m more interested in finding places where robotics can help the world but there’s no way currently of getting the research or the applications funded.

TC: You’re thought of as the father of modern robotics. Do you feel like you have to be out there, evangelizing on behalf of robotics and roboticists, so people understand the benefits rather than focus on potential dangers?

RB: It’s why I’m right now writing a book on AI and robotics and the future — because people are getting too scared about the wrong things and not thinking enough about what the real implications will be.