Samsung Working With Startup Partners To Add Gesture-Based Smart Home Control To TVs

Samsung is reportedly (via WSJ) preparing a smart home control interface that uses your smart TV to recognize basic hand gestures directed at the objects you actually want to control. So, for example, it would allow you to point at a lamp to turn it on or off, or at other nearby objects to affect them in different ways.

The shift is a bit of an extension of what Samsung already offers, which is basic gesture control of its smart televisions, but it would turn the TV itself into a sort of smart home hub. The WSJ report claims that Samsung is in talks with VTouch, a startup that creates the gesture control software, and explains how its so-called “virtual touch” interaction software would replace on-screen controls and manual remotes with fingertip pointing.

Typically, these control interfaces haven’t been huge successes with users, largely because performance has been inconsistent and not entirely accurate. VTouch tells the WSJ that its version of gesture interaction is better because it takes into account not only the hand motions themselves but also a user’s eye movements, which helps it ignore accidental input and better determine user intent.
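The report doesn’t describe how that combination works in practice, but the basic idea of requiring gaze and gesture to agree before acting can be sketched roughly. The snippet below is purely illustrative and is not VTouch’s actual software; the device names, confidence scores, and threshold are all assumptions made up for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: fuse a gaze estimate with a pointing estimate and
# only issue a command when both agree on the same target device.
# All names and values here are illustrative, not a real API.

@dataclass
class Estimate:
    device: str        # which object the user appears to be targeting
    confidence: float  # 0.0 to 1.0

def resolve_intent(gaze: Estimate, pointing: Estimate,
                   min_confidence: float = 0.7) -> Optional[str]:
    """Return the targeted device only if gaze and pointing agree and
    both estimates are confident enough; otherwise treat the input as
    accidental and do nothing."""
    if gaze.device != pointing.device:
        return None
    if min(gaze.confidence, pointing.confidence) < min_confidence:
        return None
    return gaze.device

# Example: the user looks at the lamp while pointing at it.
target = resolve_intent(Estimate("living_room_lamp", 0.9),
                        Estimate("living_room_lamp", 0.8))
if target:
    print(f"toggle {target}")  # e.g. hand off to the hub to act on it
```

The point of the gating is simply that a stray hand movement aimed at nothing in particular never reaches the “toggle” step, which is the accidental-input problem VTouch says it addresses.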

VTouch says that its tech should make its way into shipping TV sets by 2016, and those will be Samsung sets if the talks continue and a deal goes through. The ability to control devices that aren’t necessarily ‘smart’ themselves, via a central hub like a camera-enabled smart TV, makes a lot of sense as a way to usher the smart home into the general consumer population within the next few years.

Samsung won’t be alone in chasing this evolution of consumer tech, either. Microsoft is already doing much more with gesture control and user recognition via the Kinect on the Xbox One, and others are looking to make devices with embedded cameras and other sensors more aware of the users interacting with them. That might not make every user comfortable, but it does mean there’s a large opportunity in pushing past the current limits of gesture control.