Personify Live Uses Microsoft Kinect Or Other Depth Cameras For A Video Conferencing Service That Layers Presenters Over Content

A video conferencing startup called Personify, Inc. (formerly Nuvixa) is launching its first product next week. The software lets users take advantage of the depth-sensing technology in Microsoft Kinect and other depth cameras to overlay live video of themselves on top of their presentations in real time. Using Personify Live, presenters appear on top of their content and can gesture to it as they go along, whether that content is a slideshow, a computer desktop, streaming video, or anything else that runs on their machine.
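
Personify hasn't published how its segmentation actually works, but the core mechanic – using a depth camera's per-pixel distance readings to cut the presenter out of the background and composite them over the content – can be sketched in a few lines. The Python/NumPy sketch below is purely illustrative: the function name, the depth threshold, and the assumption that the color and depth frames are already aligned are ours, not Personify's, and a real product would use far more sophisticated matting and edge smoothing.

```python
import numpy as np

def composite_presenter(color_frame, depth_frame, slide, max_depth_mm=1200):
    """Overlay a presenter on top of slide content using depth-camera data.

    color_frame: (H, W, 3) uint8 RGB frame from the camera
    depth_frame: (H, W)    uint16 per-pixel distance in millimeters (Kinect-style)
    slide:       (H, W, 3) uint8 image of the content being presented
    max_depth_mm: pixels farther away than this are treated as background
                  (an assumed value, not anything Personify has specified)
    """
    # 1. Rough foreground mask: pixels close enough to the camera that also have
    #    a valid reading (0 typically means "no depth data" on these sensors).
    foreground = (depth_frame > 0) & (depth_frame < max_depth_mm)

    # 2. Turn the mask into a per-pixel alpha channel that broadcasts over RGB.
    alpha = foreground.astype(np.float32)[..., None]

    # 3. Alpha-composite: presenter pixels where the mask is set, slide elsewhere.
    out = alpha * color_frame + (1.0 - alpha) * slide
    return out.astype(np.uint8)
```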

Personify co-founder Sanjay Patel, now a professor at the University of Illinois at Urbana-Champaign, has been involved in the startup industry for years. Most notably, he was CTO at Ageia Technologies, maker of the PhysX physics engine used in game development, which Nvidia acquired in 2008. Following the sale of the company, a colleague who had been researching depth cameras since 2004 introduced him to the technology.

“They were these big, ‘research-y,’ industrial-looking machines,” says Patel. “But they eventually became the technology that’s inside Kinect.”

Upon seeing these cameras, Patel began to think about their potential use cases. “If we could get these to be cheaper somehow, and smaller, and integrated into a PC environment,” he thought to himself, “we could really improve the way people communicate with video. That was really what the passion of our founding team was,” he adds. After Patel put together the initial team in 2009, the company, then called Nuvixa, began developing software to pair depth cameras with video conferencing. Microsoft Kinect, of course, was still known as Project Natal when Personify got off the ground; the technology didn’t reach the market until November 2010.

The first beta test began in October of last year, offering access to a limited number of pilot users, including SAP, McKinsey & Company, and LinkedIn, to name a few. Offered then as a freemium product, the service attracted around 2,000 sign-ups during its tests, and the company is now in the process of converting those users to paying customers. Pricing starts at $20/user. Another package bundles three months of Personify’s service, free, with Asus’ Xtion Pro Live depth camera for $199.

Unlike most online meetings and web conferencing today, where video is lightly used and the presenter is stuck off to the side in a small box, Personify Live is designed to let users communicate with gestures, eye contact and other human-to-human cues – closer to how a presentation comes across in a real-world, face-to-face meeting.

Pilot testers have adopted the technology for inside sales, webinars, web app and online demos, training, education (both K-12 and higher ed) and more. However, Personify’s primary focus is enterprise sales, where the technology works alongside existing in-house solutions like WebEx, GoToMeeting, and Microsoft Lync. It can also work with Join.me, but only when using Personify’s screen share rather than Join.me’s. “Most companies already have a web conferencing suite they use,” says Patel. “We don’t want them to stop using those things. Our product layers on top of those products,” he explains.

Personify is now a team of around 20, split evenly between Illinois and Vietnam. Its other co-founders include Dennis Lin, Minh Do, and Quang Nguyen, and the company is backed by $3.5 million in Series A funding from AMD Ventures, Liberty Global Ventures, Illinois Ventures, and Serra Ventures.

An interesting note about Liberty – it’s the world’s second-largest cable operator, based in Europe. The company sees potential for Personify in the living room, where users could one day video call each other over their TVs. Instead of Skype or some other web chat product (like what’s built into Kinect today), Personify could overlay just the person calling on top of the live TV show or movie being watched. That future isn’t as far off as you may think: Patel says they should have demos of this by the end of next year, and possibly deployments by 2014.

In addition, he sees this as a possibility on mobile by that time, too. Depth cameras on mobile? we asked. Do you have direct knowledge of that? we wanted to know.

“I can’t really start talking about this stuff now,” says Patel. “But it’s happening. That’s all I can say.”