CMU develops a method to improve robotic grasping of transparent objects

Picking up transparent objects is hard when you’re a robot. Many traditional cameras and sensors just can’t get a good enough view to tell the gripper where to go. The infrared light that depth cameras rely on passes right through clear objects and scatters in the process, leaving the cameras struggling to determine a shape without an opaque surface to read.

The result of all of this is a high failure rate. That’s a particular issue for those looking to employ robots for recycling, as so many plastic and glass bottles are clear.

Carnegie Mellon this week released new research that could refine the process using standard consumer cameras. The team built a color camera system that determines the shape of a transparent object from color readings. It’s still imperfect, and not as accurate as with opaque objects, but the researchers say it grasps clear objects with a much higher rate of success than previous methods.
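At a high level, the approach infers geometry from an ordinary color image and then picks a grasp from that estimate. The sketch below is a hypothetical illustration of that pipeline, not CMU’s actual model: a toy PyTorch encoder-decoder maps an RGB frame to a predicted depth map, and a naive heuristic selects the closest predicted surface point as a grasp target. The network architecture, class names, and grasp heuristic are all assumptions made for illustration.

```python
# Hypothetical sketch only: a toy RGB-to-depth network plus a naive grasp
# heuristic. This is NOT the CMU system; it just illustrates the general idea
# of estimating shape from color and choosing a grasp from the estimate.
import torch
import torch.nn as nn


class RGBToDepthNet(nn.Module):
    """Toy encoder-decoder mapping a 3-channel color image to a 1-channel depth map."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))


def pick_grasp_point(depth_map):
    """Return the (row, col) of the closest predicted point as a naive grasp target."""
    depth = depth_map.squeeze()
    flat_idx = torch.argmin(depth).item()
    return divmod(flat_idx, depth.shape[1])


if __name__ == "__main__":
    model = RGBToDepthNet()
    rgb_image = torch.rand(1, 3, 128, 128)  # stand-in for a real camera frame
    with torch.no_grad():
        predicted_depth = model(rgb_image)
    print("Grasp target (pixel):", pick_grasp_point(predicted_depth))
```

In practice, a system like the one described would be trained on far richer data and would output full grasp poses rather than a single pixel, but the sketch captures the core shift the researchers describe: relying on color imagery where depth sensing alone fails on transparent surfaces.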

“We do sometimes miss,” CMU assistant robotics professor David Held said in a release tied to the announcement, “but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects.”

More of the findings will be presented at a virtual robotics conference later this summer.