The hardest thing about watching TV is finding the remote after a long, slovenly lounge on the couch. Cube26 aims to improve on that situation by turning your TV, phone, or tablet into a face-detecting powerhouse. What does that mean? Basically, your TV or other device will know when you’re looking at it and who is in the room with you, and, more importantly, it will pause the program, call, or game when you leave the room.
Founded by Cornell grads Saurav Kumar and Aakash Jain, Cube26 is still in beta, but from what I saw it works quite well on multiple platforms. For example, in addition to the aforementioned “bathroom pause,” the system can tell when you’re talking on the phone and put other devices on hold. You can also use the system for parental control, as it can recognize the people in the room and filter content accordingly.
“Other players in vision control are generally focused on one specific area, for example concentrating on hand-gesture detection for TV control or face recognition or body control,” said Kumar. “We believe in making the interaction with devices as natural as possible. For example, when you want to mute the volume for a device, instead of using hand gestures to do some pre-defined pattern, how about you say ‘ssshhh!’ to make a ‘keep quiet’ gesture?”
The company is bootstrapping now and expects to have some traction in OEM hardware over the next few months. They haven’t named any hardware partners, but they were at CES to look for distributors for the technology.
The project aims to take a holistic approach to system interaction.
“For vision control to be natural we believe that the solution is to leverage a wide range of vision signals from the user – both implicit and explicit. Signals include presence detection, gesture, age and gender detection, face recognition and eye tracking, and hand gestures.”
In other words, the system works best with passive, instinctual commands. Unlike the Kinect and similar motion controllers, the system is always watching the room for changes, allowing for a more integrated experience.
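Cube26 hasn’t published how its presence detection works, but the “bathroom pause” behavior described above can be sketched as a tiny state machine that debounces a per-frame “face visible” signal, so a brief glance away doesn’t stop playback while a real exit does. The class name and the frame-count threshold here are hypothetical, chosen purely for illustration:

```python
# Illustrative sketch only -- not Cube26's implementation.
# Models the "bathroom pause": pause playback only after the viewer's
# face has been absent for several consecutive frames, and resume as
# soon as a face reappears.

class PresencePauser:
    def __init__(self, absent_frames_to_pause=3):
        # Hypothetical tuning knob: how many consecutive "no face"
        # frames we tolerate before pausing.
        self.absent_frames_to_pause = absent_frames_to_pause
        self.absent_streak = 0
        self.playing = True

    def update(self, face_present):
        """Feed one frame's detection result; return the play state."""
        if face_present:
            self.absent_streak = 0
            self.playing = True  # viewer is back: resume immediately
        else:
            self.absent_streak += 1
            if self.absent_streak >= self.absent_frames_to_pause:
                self.playing = False
        return self.playing


# A viewer who glances away for two frames, then leaves the room:
pauser = PresencePauser(absent_frames_to_pause=3)
frames = [True, True, False, False, True, False, False, False]
states = [pauser.update(f) for f in frames]
print(states)  # the short glance never pauses; the long absence does
```

In a real device, `face_present` would come from a per-frame face detector on the camera feed; the debouncing is what makes the control feel passive rather than twitchy.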
The founders came together at a startup weekend organized by Microsoft. After six months of development, they pivoted from making eye-tracking systems for marketers to “natural control” of devices.
Again, much of this tech is fairly pie-in-the-sky right now, but as embedded systems get faster I don’t see why this couldn’t be built right into a TV or phone, thereby adding real smart features to otherwise dumb devices.