The great smartening of the idiot box continues. It was several years ago that we started seeing the first internet-connected TVs, and since then TV makers have been adding more and more slightly useful features, generally one or two per generation — it wouldn’t do to put them all out at once, of course. And while much functionality is still left to the set-top box, media player, or console, it seems inevitable that these increasingly capable display devices will integrate things we consider cutting-edge today.
Take gesture controls, for instance. Microsoft’s hit gaming peripheral, the Kinect, has made people aware of the possibilities of motion tracking and depth-sensing cameras, though it’s often hacks that really deliver on the potential. PrimeSense, which contributed much to the development of the Kinect, is hoping to combine this next-gen interface with next-gen display hardware.
Speaking at GDC Europe, PrimeSense’s Amir Hoffnung described plans to supplant traditional controls, and demonstrated the flexibility of the company’s OpenNI framework by coding a basic game in under half an hour. He hopes that the open framework will help bring new and intuitive controls to increasingly powerful TVs:
“The key products in your living room are evolving. Living rooms now have connected TVs and smart TVs that can run a range of applications beyond TV shows. But all these smart TVs will need a new remote control device, because all these smart TVs need richer and deeper levels of input.”
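Hoffnung’s half-hour demo hints at how little glue code such an interface needs: the framework hands an application per-frame hand positions, and the application turns them into events. As a purely illustrative sketch (plain Python rather than actual OpenNI calls, with a made-up coordinate format), here is how a gesture layer might turn a tracked hand’s horizontal positions into a “wave” event for, say, flipping channels:

```python
# Illustrative sketch only: a framework like OpenNI would supply the
# per-frame hand coordinates; here we stand in for them with a plain
# list of x-positions in arbitrary units.

def detect_wave(x_positions, min_swing=0.1, min_reversals=3):
    """Return True if the hand changed horizontal direction at least
    `min_reversals` times, each swing covering at least `min_swing`."""
    reversals = 0
    direction = 0            # -1 = moving left, +1 = right, 0 = unknown
    anchor = x_positions[0]  # position where the current swing started
    for x in x_positions[1:]:
        delta = x - anchor
        if abs(delta) < min_swing:
            continue                 # ignore jitter below the threshold
        new_direction = 1 if delta > 0 else -1
        if direction != 0 and new_direction != direction:
            reversals += 1           # the hand turned around
        direction = new_direction
        anchor = x
    return reversals >= min_reversals

# A back-and-forth motion registers as a wave...
print(detect_wave([0.0, 0.3, 0.0, 0.3, 0.0]))  # True
# ...while a simple reach to one side does not.
print(detect_wave([0.0, 0.1, 0.2, 0.3, 0.4]))  # False
```

The point is less the particular heuristic than the shape of the problem: once the depth camera and middleware solve tracking, the “remote control” layer is a few dozen lines of event logic.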
They face stiff competition: Microsoft opened up the Kinect SDK in June after a wave of unofficial hackery made it clear the company had a development gold mine on its hands. And although PrimeSense has worked with Asus to produce a Kinect-like device, it’s much more expensive and not quite up to snuff.
That they’re not Microsoft gives them an advantage, though. OpenNI is open-source, and while of course that doesn’t mean it’s a free-for-all, a company like Samsung or LG is far more likely to try playing with it. Microsoft is likely already exploring ways to expand Kinect on its own: no less than Gates himself talked up its applications in desktop computing.
Hoffnung also mentions OnLive being brought to TVs. While many are still skeptical of the service, its potential and the technical accomplishments behind it are difficult to deny. If you were to combine OnLive tech with some basic casual gaming, controlled by a gesture-sensing webcam, it could simply explode. Play a little match-three or farm sim during commercials, or while waiting for your rented movie to cache? You better believe there are tens of millions of couch potatoes who would jump at the opportunity. Well, perhaps not jump, but they would at least wave their hands around, and that’s all it takes.
This is all fairly distant speculation, though, and it depends very much on what TV makers bring to their devices. Add HD webcams for video chat integration and a little more horsepower behind the screen (you need a good amount of cache and a bit of specialized hardware to handle serious streaming and gesture tech), and PrimeSense’s dream could become an everyday experience.
PrimeSense’s concept is a device that allows a computer to perceive the world in 3D and derive an understanding of it based on sight, just the way humans do. On March 31, 2010, PrimeSense confirmed that it was the technology behind Microsoft’s much-touted “Project Natal.” The device pairs a sensor, which sees a user and their complete surroundings, with a digital component, or “brain,” which learns and understands user movement within those surroundings.