Apple Patents Learning Computer Vision For Gesture Control

Apple has a new patent (via AppleInsider) for 3D gesture control, specifically describing technology that helps a computer identify hand motions made by a user. The patent goes into detail about how the system can not only recognize gestures, but learn them well enough to spot them even when part of the hand making the gesture is blocked or hidden from the camera, leading to greater accuracy overall.
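The patent itself doesn't publish its algorithms, but the core idea of labeling a gesture from only the parts of the hand the camera can actually see can be sketched in a few lines. The keypoint layout, gesture names, and simple template-matching approach below are illustrative assumptions for the sake of the sketch, not Apple's or PrimeSense's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gestures": each is 21 hand keypoints in 2D (a hypothetical layout).
def make_gesture(base_angle):
    angles = base_angle + np.linspace(0, np.pi / 2, 21)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

templates = {"swipe": make_gesture(0.0), "pinch": make_gesture(1.2)}

def classify(observed, visible_mask):
    """Match only the keypoints the camera actually sees against each
    stored template, so a partially occluded hand can still be labeled."""
    best_label, best_err = None, np.inf
    for label, template in templates.items():
        err = np.mean(np.sum((observed[visible_mask] - template[visible_mask]) ** 2, axis=1))
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Simulate a noisy "pinch" with 6 of the 21 keypoints hidden from the camera.
observed = templates["pinch"] + rng.normal(scale=0.05, size=(21, 2))
visible = np.ones(21, dtype=bool)
visible[rng.choice(21, size=6, replace=False)] = False

print(classify(observed, visible))  # -> "pinch", despite the occlusion
```

A learning system along the lines the patent describes would go further, refining its stored gesture models from repeated observations over time, but the principle of scoring only the visible evidence is what makes partial occlusion survivable.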

Apple’s tech would make Kinect-style recognition more forgiving of less-than-ideal conditions, which in theory would make gesture interaction less painful for users, and therefore more likely to be used at all. The key innovation Apple made with the iPhone’s interaction model was getting touch-based input right: its capacitive screens and rigorously engineered touchpoint response were completely unlike the kludgy resistive touch experiences customers were used to.

In 2013, Apple acquired PrimeSense, the company whose technology powered much of the original Kinect sensor. Some speculated at the time that Apple might be interested in using PrimeSense tech to add gesture-based input to Apple TV, among other possible uses. Apple successfully transferred PrimeSense patents to itself last year, and this new one contains key ingredients for improving the accuracy and efficacy of gesture recognition over time.

Apple first filed for this patent in March 2013, and the document credits former PrimeSense employees as its inventors.