SegNet is a new system created by the University of Cambridge that can “read” a road scene, identifying features including street signs, road markings, pedestrians, and even sky. The system takes an RGB image of a road and labels each pixel of the scene by category using a Bayesian analysis.
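The per-pixel labeling described above can be sketched in a few lines. This is a toy NumPy illustration, not the actual SegNet implementation (which is a deep encoder-decoder neural network); the `segment` and `mc_segment` names are invented for this example. The Bayesian flavor is approximated here by averaging several stochastic score maps and using their spread as a per-pixel uncertainty, in the spirit of Monte Carlo dropout:

```python
import numpy as np

def segment(score_maps):
    """Per-pixel classification: pick the highest-scoring class at each pixel.

    score_maps: array of shape (num_classes, H, W), one score map per class
    (e.g. road, sky, pedestrian, sign).
    Returns an (H, W) array of class indices.
    """
    return np.argmax(score_maps, axis=0)

def mc_segment(score_map_samples):
    """Bayesian-style estimate: average several stochastic forward passes
    (as in Monte Carlo dropout), classify from the mean scores, and report
    the spread across samples as a per-pixel uncertainty measure.
    """
    samples = np.stack(score_map_samples)           # (T, num_classes, H, W)
    mean_scores = samples.mean(axis=0)              # (num_classes, H, W)
    labels = np.argmax(mean_scores, axis=0)         # (H, W)
    uncertainty = samples.std(axis=0).mean(axis=0)  # (H, W)
    return labels, uncertainty
```

A self-driving system can then trust high-confidence regions (low uncertainty) and treat ambiguous pixels more cautiously.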
The second part, interestingly, allows a vehicle to orient itself no matter where it is positioned. It can “look” at a single image and assess its “location and orientation within a few metres and a few degrees.” That makes the system far more precise than GPS, and it requires no wireless connection to analyze and report a position.
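To make the “few metres and a few degrees” claim concrete, here is a minimal sketch of how such an estimate could be scored against ground truth. The `pose_error` helper and its `(x, y, heading_degrees)` format are assumptions for illustration, not part of the Cambridge system:

```python
import math

def pose_error(pred, truth):
    """Compare a predicted camera pose against ground truth.

    pred, truth: (x, y, heading_degrees) tuples, positions in metres.
    Returns (position error in metres, heading error in degrees,
    wrapped into [0, 180]).
    """
    dx, dy = pred[0] - truth[0], pred[1] - truth[1]
    pos_err = math.hypot(dx, dy)
    ang_err = abs(pred[2] - truth[2]) % 360
    ang_err = min(ang_err, 360 - ang_err)  # shortest angular distance
    return pos_err, ang_err
```

For example, a prediction of `(3, 4, 350)` against a true pose of `(0, 0, 10)` is off by 5 metres and 20 degrees; consumer GPS alone typically cannot match that and provides no orientation at all.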
You can try SegNet now by feeding it a street image from your town. The system will analyze the image and tell you what it sees.
The benefit of this sort of system is that it eschews GPS entirely and instead focuses on machine learning in 3D space. It’s not quite perfect yet.
“In the short term, we’re more likely to see this sort of system on a domestic robot – such as a robotic vacuum cleaner, for instance,” said research leader Professor Roberto Cipolla. “It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics.”