Interesting video of improved robot vision and obstacle navigation

This is a short video from the NYU branch of a DARPA-funded project that aims to improve the way robots perceive the world around them (and, most importantly, in front of them). As the video notes, the resolution (both temporal and spatial) of current robots' visual systems is very limited due to data bandwidth and CPU constraints. Consequently, a robot cannot plan its path beyond about 12 meters. That's troublesome in cul-de-sacs and other tricky situations in which the robot can't see far or well enough to pick a route. Several programs at different universities and companies are taking a hard look at this problem.

The LAGR (Learning Applied to Ground Robots) project aims to change the way the robot sees its environment, first by splitting it into a near, immediately navigable area and a far, future-navigable area. Objects in the "far" area are then "learned" by the robot as being obstacles or traversable terrain once they are discerned up close, a process they've made very adaptable to different situations, and one I can only assume feeds a constantly improving database of object data. The robot then plans a rough path that avoids the larger, more distant obstacles and fine-tunes that path as they enter its finer-resolution "near" area. It's a great idea and appears to work well, as you can see in the video. It also mimics the way the human mind categorizes things, quickly and accurately (based on learned visual cues) sorting objects into immediate, distant, and various degrees in between.
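The project page doesn't show any code, but here's a rough, back-of-the-envelope sketch of how that near/far split and learned far-field classification could fit together. To be clear, this is my own guess, not the LAGR implementation: the names, the 10-meter boundary, and the simple count-based "memory" are all invented for illustration.

```python
# Hypothetical sketch of the near/far idea described above -- not LAGR's code,
# just an illustration of the control loop. Names and thresholds are invented.
from dataclasses import dataclass, field

NEAR_RANGE_M = 10.0  # assumed boundary between "near" (measured directly) and "far" (predicted)

@dataclass
class TerrainMemory:
    """Remembers how far-field appearance features turned out once seen up close."""
    counts: dict = field(default_factory=dict)  # feature -> [times_traversable, times_obstacle]

    def learn(self, feature, was_traversable):
        t, o = self.counts.get(feature, [0, 0])
        self.counts[feature] = [t + int(was_traversable), o + int(not was_traversable)]

    def predict_traversable(self, feature):
        t, o = self.counts.get(feature, [1, 1])  # optimistic prior for unseen features
        return t >= o

def plan(cells, memory):
    """Learn from near cells, coarse-plan over far cells, fine-plan over near cells."""
    # Near-field measurements feed back into memory so far-field guesses improve.
    for c in cells:
        if c["range_m"] <= NEAR_RANGE_M:
            memory.learn(c["feature"], c["traversable"])
    fine = [c for c in cells
            if c["range_m"] <= NEAR_RANGE_M and c["traversable"]]
    rough = [c for c in cells
             if c["range_m"] > NEAR_RANGE_M and memory.predict_traversable(c["feature"])]
    return fine + rough  # follow trusted near cells first, then the rough far plan

if __name__ == "__main__":
    memory = TerrainMemory()
    # Toy "map": tall grass looks like an obstacle from afar but is drivable up close.
    cells = [
        {"feature": "tall_grass", "range_m": 4.0, "traversable": True},
        {"feature": "boulder", "range_m": 6.0, "traversable": False},
        {"feature": "tall_grass", "range_m": 25.0, "traversable": None},
        {"feature": "boulder", "range_m": 30.0, "traversable": None},
    ]
    print([c["feature"] for c in plan(cells, memory)])  # the distant boulder gets avoided
```

The point of the toy run is just the feedback loop: the far-off boulder gets dropped from the rough plan only because a boulder was already encountered up close, which (as far as I can tell from the video) is the gist of how the real system keeps getting better.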

The project page is well worth a look for anyone interested in robotics, cybernetics, or artificial intelligence; it's full of videos and other interesting work they've done. Just think: someday your four-eyed robot butler might be using this very system to bring you a space-beer. More here. And I'd like to thank the lady from the video for having an alluring voice.