3D Gesture Control Is An Area Of Innovation We Likely Don’t Need Or Want

Minority Report was an enjoyable action flick, but it may hold the blame for lodging the idea in our collective heads that 3D gesture control is the next frontier for computing. Microsoft’s Kinect helped spread that idea as well, with a pretty good gesture experience (though one highly limited by space requirements, available applications, and so on). But a lot of startups and other companies are chasing this carrot, and that raises the question of whether there’s even a carrot to chase.

Maybe the most headline-grabbing of those chasing the gesture control carrot is Leap Motion. The company raked in lots of pre-order interest for its device, which uses infrared tech to track finger and hand movements in 3D space and then map those movements to controls for apps on a computer. But then it arrived, and the reality was nothing like people had imagined, even after the company delayed the device’s release for an extended beta to improve the consumer experience.

Leap Motion had good reason to go back to the drawing board: a device like this carries a huge risk, because if it doesn’t simply blow you away, it ends up in a drawer and never gets used again. Unfortunately for the company, after a couple of weeks of using one, I suspect that’s the fate awaiting a lot of its controllers.

Early reviews were not very kind to the Leap Motion, but many of them may actually have been over-generous. The controller is impressive enough during its demo, when it shows you the finger points and hand-skeleton model it’s detecting, but even there it’s apparent that the detection is finicky: your hands have to occupy a sweet spot relative to the gadget itself for it to work really well.

Even when you’re in that zone, the problems don’t end. How each app uses gesture input varies, and tasks like web browsing with it are a definite pain. On balance, you get more frustration than pleasure out of the experience, and that’s not good for long-term adoption.

The experience of Leap Motion is flawed enough that it makes me wonder whether gesture control is something it’s even possible to get right. Minority Report painted an idealized picture of how it might look, but it is, after all, a work of fiction. Think about what Tom Cruise’s character is actually doing in many of those scenes: wouldn’t it be easier to accomplish the same thing with a traditional multi-monitor setup, a keyboard, and a mouse?

There are a lot of people looking at gesture control right now, including Waterloo’s Thalmic Labs with its MYO armband, the new Haptix Kickstarter, and pmdtechnologies from Germany with its CamBoard pico. Microsoft is also refining and improving the Kinect for the upcoming Xbox One console.

Gesture input is a tempting area of focus, since it has long been a staple of speculative and science fiction. Kinect and the Wii showed that large groups of people can enjoy that kind of device interaction, but only in very specific contexts. Even if executed well, I’m not sure any solution is going to be anything other than a niche curiosity; we’ll probably see input evolve along other, unexpected courses instead. The MYO and others could still prove me wrong (and I hope they do), but if you’ve got a farm to bet, I wouldn’t bet it on a gesture control revolution.