I was at the show today talking to some folks about the current crop of military robots from companies like Foster-Miller and even iRobot. These are robots the Army deploys for bomb detection, room clearing, and all the nasty jobs you don’t want to send humans in for. For years, politicians derided these things as remote-controlled toys rather than viable battlefield weapons.
Foster-Miller and the other robot makers bristled at those claims, consistently insisting that they were building battlefield robots, not glorified RC cars: intelligent machines with valuable skills. This back-and-forth went on for years. As tactics in Iraq changed, however, politicians changed their tune and began clamoring for these beasts to be armed.
That raises another problem: how do you arm a semi-autonomous device? Arming a robot wades straight into heady ethical territory. Who is responsible? Who is in control? Can these robots make life-and-death decisions on the battlefield? The answer? Call it a remote-controlled robot and say the operator is behind everything. So from high-minded rhetoric comes a low-end solution: pull the ethics out of the equation by redefining your terms.