You’ve surely seen all of those videos of robots opening and walking through doors. The dirty little secret is that most, if not all, of them involve a good bit of human hand-holding. That can come in the form of manual remote guidance, in which a user controls the process in real time, or guided training, in which the robot is walked through the process once so it can mimic the activity exactly the next time.
New research from ETH Zurich, however, points to a model that requires “minimal manual guidance.” It’s effectively a three-step process. First, the user describes the scene and the desired action. Second, the system plans a rough, somewhat convoluted route. Third, it refines that route into a minimal viable path.
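To make the plan-then-refine idea concrete, here is a toy sketch in Python. It is purely illustrative: the actual planner reasons jointly over limbs, forces, and contact timing, none of which is modeled here. This just shows the general pattern of taking a convoluted coarse route and shortcutting it into a minimal path, with all function names and the obstacle model being my own assumptions.

```python
# Toy "plan a rough route, then refine it" sketch (not the paper's method).
# Paths are lists of (x, y) tuples; obstacles are circles (cx, cy, radius).

def path_length(path):
    """Total Euclidean length of a polyline."""
    return sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(path, path[1:])
    )

def segment_is_free(p, q, obstacles, steps=20):
    """Sample points along p->q and reject any that fall inside an obstacle."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        for cx, cy, r in obstacles:
            if (x - cx) ** 2 + (y - cy) ** 2 < r ** 2:
                return False
    return True

def shortcut(path, obstacles):
    """Greedily skip waypoints wherever a direct segment is collision-free."""
    refined = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not segment_is_free(path[i], path[j], obstacles):
            j -= 1
        refined.append(path[j])
        i = j
    return refined

# A deliberately convoluted coarse route skirting one circular obstacle.
coarse = [(0, 0), (0, 2), (2, 3), (4, 2), (4, 0), (6, 0)]
obstacles = [(3, 0.5, 1.0)]
refined = shortcut(coarse, obstacles)
```

Running this collapses the six-waypoint coarse route to three waypoints while keeping the same start and goal, mirroring (in a very loose sense) the refinement step the article describes.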
“Given high-level descriptions of the robot and object,” the research paper explains, “along with a task specification encoded through a sparse objective, our planner holistically discovers: how the robot should move, what forces it should exert, what limbs it should use, as well as when and where it should establish or break contact with the object.”
The system breaks tasks down into two main categories: object-centric and robot-centric. The former covers tasks like opening a door or a dishwasher, while the latter covers things like moving the robot itself around objects.
The team says the system can be adapted for different form factors, but for the sake of simplicity, these demos are executed on a quadruped – specifically ANYbotics’ ANYmal. The startup was spun out of ETH Zurich and has therefore become a favorite for these sorts of research projects.
The team adds that the work can serve as a stepping stone to “developing a fully autonomous loco-manipulation pipeline.” So, one step closer to systems that can open doors without any sort of human intervention.