We humans enjoy not having knives inside us. Robots don’t know this, three laws be damned, so it’s up to us to teach them through careful training. Thankfully, the good dudes at Cornell are on the case.
Ashutosh Saxena, assistant professor of computer science, and his team have created a system for correcting robotic motions. In their demo, the robot lifts a knife from a counter and nearly stabs a guy. The trainer explains that stabbing is not OK, and through trial and error the robot learns to move in a way that ensures minimal stabbage. The system uses trajectory mapping: the robot proposes three potentially un-stabby motions, and the human either picks the best one or physically guides the robot’s arm toward something safer. From the paper:
Then humans can give corrective feedback. As the robot executes its movements, the operator can intervene, manually guiding the arms to fine-tune the trajectory. The robot has what the researchers call a “zero-G” mode, where the robot’s arms hold their position against gravity but allow the operator to move them. The first correction may not be the best one, but it may be slightly better. The learning algorithm the researchers provided allows the robot to learn incrementally, refining its trajectory a little more each time the human operator makes adjustments or selects a trajectory on the touch screen. Even with weak but incrementally correct feedback from the user, the robot arrives at an optimal movement.

The robot learns to associate a particular trajectory with each type of object. A quick flip over might be the fastest way to move a cereal box, but that wouldn’t work with a carton of eggs. Also, since eggs are fragile, the robot is taught that they shouldn’t be lifted far above the counter. Likewise, the robot learns that sharp objects shouldn’t be moved in a wide swing; they are held in close, away from people.
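The loop described above — the robot ranks candidate trajectories, the human supplies a slightly better one, and the robot nudges its scoring model each round — can be sketched in a few lines. This is a minimal, illustrative toy in the style of preference-based incremental learning, not the researchers’ actual code; the feature values and names are invented for the example.

```python
# Toy sketch of weak-corrective-feedback learning: a linear model scores
# candidate trajectories, and each human correction shifts the weights
# toward the (slightly better) corrected trajectory. All names and
# numbers here are illustrative assumptions, not from the paper.
import numpy as np

def score(w, features):
    """Rank a candidate trajectory with a linear utility model."""
    return float(np.dot(w, features))

def update(w, chosen, corrected, lr=0.5):
    """Move the weights toward the human's correction."""
    return w + lr * (corrected - chosen)

# Hypothetical 2-D features: (height of swing, distance kept from people).
candidates = {
    "high_swing": np.array([0.9, 0.2]),  # wide, high arc near a person
    "low_tucked": np.array([0.1, 0.9]),  # held low and in close
}

w = np.zeros(2)  # the robot starts with no preference
for _ in range(3):
    # Robot executes its current best-scoring trajectory...
    best = max(candidates, key=lambda k: score(w, candidates[k]))
    # ...and the human guides it toward the safer one ("zero-G" style).
    w = update(w, candidates[best], candidates["low_tucked"])

# After a few weak corrections, the safer trajectory scores highest.
assert score(w, candidates["low_tucked"]) > score(w, candidates["high_swing"])
```

Even though each individual correction is only “slightly better,” the repeated small weight updates are enough to make the safe trajectory win the ranking — which is the point the paper makes about weak but incrementally correct feedback.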
You can read their entire paper here or simply watch the amazing, non-stabbing robot below. Sadly, when the robots become TIDWRTWHUFOO we may not be so lucky, or unstabbed.