Mobile Devices Will Either Have 3D Sensors Or Suffer Flat Sales

When Apple’s “Shot on iPhone 6” ad campaign covered billboards and posters in 24 cities around the world, it proved two things: Mobile camera technology has become remarkably good, and mobile device manufacturers badly want you to know about it.

Moves like these are smart in a world where 1.8 billion digital photos are shared every day. But the gains in image quality from the “Megapixel Wars” have passed the point where most consumers can notice them. A well-built 8 megapixel camera is plenty for any non-professional use. More has stopped being better.

At least, that’s true as long as we’re confined to the flat images smartphone cameras provide today. But Intel’s new RealSense/Project Tango phone concept shows there is still more valuable data to be captured: Depth. Small, inexpensive sensors can now see our world in all three dimensions, opening consumer applications that no number of megapixels alone could achieve.

Exploiting depth is the only logical choice mobile device manufacturers have if they are to keep their products competitive. And I’m not the only one who thinks so.

Breaking 3D Out Of Hollywood

Intel isn’t the only one at the party. Apple recently confirmed acquiring 3D-sensor manufacturer LinX for $20 million. They didn’t say why, but it’s not hard to figure out when Allied Market Research predicts 80 percent of smartphones will carry 3D sensors by 2018, in a market worth a total of $2.02 billion by 2020. With depth, a computer can understand the size, shape and distance of every object in its field of view. That means far more use cases than flat images could ever support, no 3D glasses required.

The simplest examples are for image and video editing. An image with depth information would allow you to change things like focus or lighting after it’s been taken. Apps could go so far as to add or remove entire objects from a scene, with lighting, shading and occlusion properly accounted for. Depth will bring eye-catching special effects to everybody the way high-resolution smartphone cameras made everyone a photographer.
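
To make the idea concrete, here’s a minimal sketch of depth-guided refocusing, assuming an RGB photo with an aligned per-pixel depth map; the file names, depth units and blur parameters are all hypothetical:

```python
# Hypothetical sketch: synthetic refocus using a depth map aligned to an
# RGB image. File names, units and parameters are illustrative only.
import cv2
import numpy as np

image = cv2.imread("photo.png")            # H x W x 3 color image
depth = np.load("depth.npy")               # H x W depth map, in meters

focal_plane = 1.5                          # depth to keep sharp, in meters

# Blur strength grows with distance from the chosen focal plane.
blur = np.abs(depth - focal_plane)
blur = blur / (blur.max() + 1e-6)          # normalize to [0, 1]

# Build progressively blurrier copies of the image, then pick each
# pixel from the copy matching its blur strength.
kernel_sizes = [1, 5, 9, 13, 17, 21]       # odd sizes, as OpenCV requires
stack = [cv2.GaussianBlur(image, (k, k), 0) for k in kernel_sizes]
index = np.minimum((blur * len(stack)).astype(int), len(stack) - 1)

result = np.zeros_like(image)
for i, layer in enumerate(stack):
    result[index == i] = layer[index == i]

cv2.imwrite("refocused.png", result)
```

A shipping app would use a more sophisticated lens model, but the principle is the same: depth tells each pixel how to behave.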

And that’s just the start. Full-sized 3D sensors are already being used for 3D scanning, a process that maps the exact size, shape and colors of a given object: anything from a human face to priceless works of art. At the IDF15 keynote, Intel CEO Brian Krzanich quickly calculated the storage space objects would need simply by scanning them with a 3D sensor.
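
The keynote didn’t detail the math, but a simple space estimate falls out of scan data almost for free. Here’s a minimal sketch, assuming the scan arrives as an N x 3 NumPy array of XYZ points in meters (the file name is a placeholder):

```python
# Hypothetical sketch: how much room does a scanned object need?
# Assumes the 3D scan is an N x 3 array of XYZ points, in meters.
import numpy as np

points = np.load("scan.npy")            # hypothetical scan output

mins = points.min(axis=0)               # smallest x, y, z
maxs = points.max(axis=0)               # largest x, y, z
width, depth, height = maxs - mins      # axis-aligned bounding box edges

print(f"Needs roughly {width:.2f} x {depth:.2f} x {height:.2f} m "
      f"({width * depth * height:.3f} cubic meters)")
```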

Other promising uses are in virtual and augmented reality, where depth could capture a real environment (like an apartment for sale or a teleconferencing room) and let anyone in the world come for a virtual visit. We could scan our faces and stick them on a 3D avatar for use in teleconferencing or even video games. Further applications exist for 3D printing, design, mapping, object recognition, facial recognition, gesture-based control and more.

But How Will They Work?

Intel’s RealSense, like Microsoft’s more famous Kinect cameras, works by projecting a signal (a laser or infrared light) and measuring how it bounces off or otherwise interacts with the environment. These sensors work well indoors and benefit from an existing base of software designed for the technique, and high consumer awareness makes them an attractive choice for vendors.
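
One common variant of this “active” approach is time of flight: the sensor times how long the projected light takes to bounce back, and since light’s speed is known, distance is a one-line calculation. A minimal sketch of the principle:

```python
# Minimal sketch of the time-of-flight principle behind "active" depth
# sensors: emit a light pulse, time its return, halve the round trip.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    """Distance to a surface given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * seconds / 2.0

# A pulse returning after 10 nanoseconds means a surface about 1.5 m away.
print(distance_from_round_trip(10e-9))  # ~1.499 meters
```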

Another technique is stereovision, the method used by LinX, Apple’s new acquisition. Modeled on human depth perception, this kind of sensor takes in feeds from two cameras (the eyes) and compares the difference in each object’s horizontal placement between them (the disparity) to calculate how far away it is. Stereovision isn’t subject to interference the way “active” depth-sensing techniques are, which means greater potential viability outdoors and at longer ranges.
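
As a rough sketch of the idea, here’s how depth can be recovered from two rectified views using OpenCV’s basic block matcher; the focal length and baseline below are hypothetical calibration values, and the image files are placeholders:

```python
# Sketch of stereo depth: match blocks between two rectified views,
# then convert per-pixel disparity to distance.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# StereoBM returns fixed-point disparities scaled by 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

focal_length_px = 700.0   # hypothetical, from camera calibration
baseline_m = 0.02         # hypothetical spacing between the two lenses

# The farther an object, the smaller its shift between views:
# depth = focal_length * baseline / disparity.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]
```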

Two big challenges have kept stereovision out of the limelight: comparing the feeds in real time takes lots of computing power, and calibrating multiple cameras gets complicated. However, recent advances in mobile chipsets are beginning to remedy these issues. We can expect to see both RealSense-style and LinX-style solutions on the market in the near future.

Not Another Flash In The Pan

Memories of the not-so-memorable attempts at 3D smartphones a few years ago may stir up doubts about this trend. But the 3D tech of those days was too primitive; all those devices could do was try to bring Hollywood-style 3D to a new platform, since movies were the only consumer-facing application where capturing depth had found any success. Not only has the technology since improved in raw performance, it has been validated in far more use cases.

More importantly, consumer 3D is hot. Facebook, Google and Microsoft are pouring billions into VR/AR research and acquisitions, and dominating headlines with each new build. Real-time environment maps from depth sensors are being used to test self-driving taxis and autonomous delivery drones. And with the number of registered mobile developers growing by hundreds of thousands each year, we’re sure to see loads of apps that use 3D sensors in ways even experts can’t predict.

So when you take your first selfie with the next generation of mobile devices, put on your best smile: That picture will be worth much more than a thousand words.