This robot arm’s AI thinks like we do about how to grab something

Robots are great at doing things they’ve been shown how to do, but when presented with a novel problem, such as an unfamiliar shape that needs to be gripped, they tend to choke. AI is helping there in the form of systems like Dex-Net, which uses deep learning to let a robotic arm improvise an effective grip for objects it’s never seen before.

The basic idea behind the system is rather like how we figure out how to pick things up. You see an object, understand its shape and compare it to other objects you’ve picked up in the past, then use that information to choose the best way to grab it.

Dex-Net doesn’t have the advantage of being a living person with eyes and a memory, so its creators gave it more than six million artificial 3D representations of objects and had it work out, in theory, the best way to pick up each one. In real life, the system looks at an object, compares its point cloud to those in its memory, picks what it thinks is the closest fit and applies the grip that worked for that match.
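As a rough illustration of that matching step (not the actual Dex-Net pipeline, whose internals the article doesn't detail), here is a minimal Python sketch: each object's point cloud is boiled down to a simple shape descriptor, and a new object borrows the grip of its closest match in a precomputed library. The function names and the descriptor choice are hypothetical.

```python
import numpy as np

def descriptor(points: np.ndarray, bins: int = 8) -> np.ndarray:
    """Crude shape descriptor: a normalized 3D occupancy histogram of the
    point cloud, after centering and scaling it into a unit cube."""
    centered = points - points.mean(axis=0)
    scale = np.abs(centered).max() or 1.0
    normalized = centered / scale  # roughly inside [-1, 1]^3
    hist, _ = np.histogramdd(normalized, bins=bins, range=[(-1, 1)] * 3)
    flat = hist.ravel().astype(float)
    return flat / (flat.sum() or 1.0)

def closest_grasp(query_cloud: np.ndarray, memory: list) -> str:
    """memory is a list of (descriptor, grasp) pairs built offline from the
    synthetic object set; return the grasp of the nearest descriptor."""
    q = descriptor(query_cloud)
    dists = [np.linalg.norm(q - d) for d, _ in memory]
    return memory[int(np.argmin(dists))][1]

# Usage with random stand-in point clouds and placeholder grasp labels.
rng = np.random.default_rng(0)
memory = [(descriptor(rng.normal(size=(500, 3))), f"grasp_{i}") for i in range(3)]
print(closest_grasp(rng.normal(size=(500, 3)), memory))
```

A real system would rely on learned features and a far larger library of objects, but the retrieve-the-nearest-match structure is the same idea the paragraph describes.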

The researchers presented Dex-Net with dozens of objects it hadn’t seen before, and its chosen grip failed only once. That suggests the system is fairly robust despite being trained on synthetic data, and it comes up with a candidate grip in less than a second on average.

Dex-Net is the product of Berkeley roboticists, who are set to present the latest version of the system at a conference in July. They also plan to release the data set of objects and point clouds they’ve amassed.