Swiss robo-DJ demonstrates future of AI-human symbiosis

The headline makes it sound a little more sinister than it is, but that’s really the gist of it. QB1, a robot created by Swiss group OZWE, is essentially a next-generation music-playing machine. While services like Pandora and Genius playlists are changing the way people interact with their music within the confines of the traditional OS, OZWE wanted to change the way we interact with our entertainment devices in the first place. The robot is aware of its surroundings in 3D, recognizes faces and pictures, and can interpret gestures. I was skeptical at first, but on reflection, the QB1 seems like a really interesting and powerful idea.

The QB1’s screen turns to face you, but that’s the limit of its movement. It also shows a sort of shadow version of you and your surroundings, which helps you aim your gestures on-screen. But while its stated capabilities are interesting, it’s the implied capabilities that seem more important. Think of the convenience of multi-touch gestures applied to all your media, not limited to a small patch on your laptop. Raise your hand, make your fingers into a shelf, then lower it, and the volume decreases. Spin your finger clockwise to fast forward, counter-clockwise to rewind. Speak the name of a song, or the track number, or hold up an album cover to play it. All of this from anywhere in the same room as the QB1, or whatever successor makes good on these ideas.
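None of this reflects QB1’s actual software, but the mapping the paragraph imagines, from recognized gestures to playback commands, is simple to picture as code. A rough Python sketch, with gesture names and command strings invented for illustration rather than taken from OZWE’s API:

```python
from enum import Enum, auto

class Gesture(Enum):
    HAND_LOWER = auto()       # flat hand lowered ("shelf" gesture)
    SPIN_CLOCKWISE = auto()   # finger circled clockwise
    SPIN_COUNTER = auto()     # finger circled counter-clockwise

# Hypothetical mapping from recognized gestures to player commands.
GESTURE_COMMANDS = {
    Gesture.HAND_LOWER: "volume_down",
    Gesture.SPIN_CLOCKWISE: "fast_forward",
    Gesture.SPIN_COUNTER: "rewind",
}

def handle_gesture(gesture: Gesture) -> str:
    """Turn a recognized gesture into a playback command string."""
    return GESTURE_COMMANDS.get(gesture, "ignore")

print(handle_gesture(Gesture.SPIN_CLOCKWISE))  # -> "fast_forward"
```

The hard part, of course, isn’t this lookup table; it’s reliably recognizing the gesture in the first place, which is where the next point comes in.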

Image credit: OZWE

The catch is that these interactions are difficult to simply program. Human behavior and the minute details of gestures are unpredictable, so they have to be demonstrated to an AI like QB1 over and over, not just coded in. Think of handwriting or voice recognition: even after “training” a program for weeks, you’ll still get mistakes due to the limitations of the system. The only way such systems get better, like us humans, is to make mistakes and learn from them. That’s why OZWE is looking for volunteers to host a QB1 in their homes for a while so it can learn the basics of human interaction (and, presumably, to work out some bugs before launch without losing any sales).
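OZWE hasn’t published how QB1’s learning actually works, but the learn-from-examples idea described above is easy to sketch. Here’s a minimal, hypothetical illustration in Python: a toy nearest-neighbor gesture recognizer whose accuracy comes entirely from accumulated demonstrations rather than hand-written rules (the GestureLearner class, feature values, and labels are all invented for the example, not OZWE’s code).

```python
import math
from collections import Counter

class GestureLearner:
    """Toy nearest-neighbor gesture recognizer that improves only by
    accumulating labeled examples, never by adding hand-coded rules."""

    def __init__(self, k=3):
        self.k = k
        self.examples = []  # list of (feature_vector, label) pairs

    def observe(self, features, label):
        """Store one demonstrated gesture (e.g. 'volume_down')."""
        self.examples.append((tuple(features), label))

    def classify(self, features):
        """Guess the label of a new gesture from the k closest known examples."""
        if not self.examples:
            return None
        nearest = sorted(self.examples,
                         key=lambda ex: math.dist(ex[0], features))[: self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

# Each training pass is another person showing the same gesture slightly
# differently; accuracy comes from the volume of real examples, not cleverer code.
learner = GestureLearner()
learner.observe([0.9, -0.8], "volume_down")   # hand lowered quickly
learner.observe([0.7, -0.6], "volume_down")   # hand lowered slowly
learner.observe([0.1, 0.9], "next_track")     # clockwise finger spin
print(learner.classify([0.8, -0.7]))          # -> "volume_down"
```

With only a handful of examples a recognizer like this misfires constantly, which is exactly why volunteers living with the robot are more valuable than more programmers.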

This is something we’ll be seeing more of soon as AI begins to creep into what were once dumb machines. With things like Pandora and Genius replacing traditional shuffle (which was itself a marginally more “intelligent” replacement for, say, a 5-CD changer), we’re witnessing the beginning of the AI-ification of our devices. The technology itself is a step up in sophistication, but the interactions we have with the machines get simpler. Consider the OS you’re running, be it XP or OS X or Linux: they’re far more complicated and “intelligent” than their predecessors, yet far more accessible. Natural interaction with devices is the future, but the more abstract or fuzzy the interactions (is the user gesturing, or saying “next track”?), the more data has to go into them, and the more they need to harvest from genuine human interaction. It’s nice to feel needed.

I know I’m dragging this out into a whole thing, but awareness of surroundings and users is one of the major interface changes hardware is going through. Light-sensitive keyboards, smile shutters, and multi-touch are just the baby steps. While the QB1 is far from the vanguard of this AI-in-everyday-life movement, OZWE’s robot captures it pretty well by demonstrating both the possibilities and the limitations of such complicated devices.

There’s more info on the QB1 at OZWE’s site.

[via CNET]