Aquifi Changes The Computing Interface With A Wave Of The Hand

After three years in stealth mode, the magicians behind the motion capture technology that powers Microsoft’s Kinect are launching Aquifi, the next step in their development of that technology.

With backing from Benchmark Capital, one of Silicon Valley’s top-tier venture capital firms, and a host of heavy-hitting individual investors, Palo Alto, Calif.-based Aquifi has spent the past three years developing software that uses commodity sensor equipment — like the cameras and video components in smartphones and tablets — to recognize and interpret gestures so that users can have touchless interactions with their devices.

Think of it as the technology in Minority Report or Iron Man.

“Gesture control is inflexible because it uses custom hardware and custom sensors. Because these things are customized they are very high cost [and] the interface is focused on the machine rather than the user,” says Nazim Kareemi, Aquifi’s chief executive. “We spent a lot of time thinking about our experience [at Canesta] and said let’s focus on what would be an ideal solution: the machine should adapt to you and should react to the way you are trying to communicate with it.”


Kareemi’s vision — and Canesta’s success — was persuasive enough to attract Benchmark and angel investors, including Sling Media founder Blake Krikorian and Rambus co-founder Mike Farmwald, to invest $9 million in the company in 2012.

“The vision that Aquifi’s founders saw a decade ago for 3-D tracking and its consequences is becoming a reality today,” said Bruce Dunlevie, a partner at Benchmark Capital, in a statement. “The team has learned a lot from the collective experience of its members, and understands what is needed to make a fluid experience available to everyone, on all their devices.”

Potential applications for the technology are everywhere, but the first use case Kareemi points to is interacting with content without having to hold a device. “You and the machine don’t have to be welded together,” he says. If someone is following a recipe on a tablet, the device can be across the kitchen from the stove and the cook can still scroll through the recipe onscreen while chopping onions.

“Because we have image sensors that can discern the depth between a user and a device, the size of any content can be adjusted depending on distance from the screen,” Kareemi says.
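Aquifi has not published an SDK, but the behavior Kareemi describes can be sketched in a few lines: take an estimated viewer-to-screen distance from a depth sensor and map it to a content scale factor. Everything below (the function name, the reference distance, the clamping range) is an assumption for illustration, not Aquifi’s actual API.

```python
# Hypothetical sketch: scale on-screen content with the user's distance,
# in the spirit of the depth-based behavior Kareemi describes.
# Names and numbers are illustrative assumptions, not Aquifi's API.

def content_scale(distance_m: float,
                  reference_m: float = 0.5,
                  min_scale: float = 1.0,
                  max_scale: float = 3.0) -> float:
    """Return a scale factor that grows linearly with viewing distance.

    distance_m  -- estimated user-to-screen distance (e.g. from a depth sensor)
    reference_m -- distance at which content is shown at its normal size
    """
    scale = distance_m / reference_m
    # Clamp so content never shrinks below normal size or grows without bound.
    return max(min_scale, min(max_scale, scale))


if __name__ == "__main__":
    # A cook two meters from the tablet gets text enlarged (capped at 3x),
    # while someone at arm's length sees it at normal size.
    for d in (0.5, 1.0, 2.0):
        print(f"{d} m -> scale {content_scale(d):.1f}x")
```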

That’s only the most basic iteration of what the technology can do, according to Kareemi. Beyond controlling devices through more nuanced and natural gestures, rather than through a specific set of stylized actions, users could enable devices to auto-lock based on facial recognition technologies, or turn on and off depending on whether a user is looking at them.

Other applications the company envisions include augmented reality on wearables, three-dimensional object scanning and room mapping with a smartphone, and hands-free, safe-driving features combined with voice commands.

Meanwhile, the biggest technology companies are hard at work embedding sensors into devices in an effort to bring perceptual computing to the mass market. Intel Capital has a $100 million investment fund focused on perceptual computing. Google has Project Tango, which is tackling the same issues of object awareness.

“We’re starting to see awareness of the environment outside of the touchscreen itself,” says Kareemi.

Aquifi expects to start engaging developers with a program that will launch in the third quarter of 2014, and envisions hundreds of applications beyond any basic user interface. “There will be sample programs where people can see what this system is capable of,” Kareemi says. The company expects the first Aquifi-enabled devices to be on the market by 2015.

Photo via Flickr user Open Exhibits.