In the few years I’ve been in Silicon Valley, if someone asked me to sum up — in one word — what defined and dominated consumer technology applications during that time, I’d have no choice but to answer: “Photos.” Now, it’s easy for others to sit back and roll their eyes at the thought of it. “Why not solve big problems?” an aggravated chorus might wail. Yet looking back over this period, the big events touching on digital pictures gained outsized attention: the launch of the iPhone 4, with its incredible camera; the meteoric rise and acquisition of Instagram; the technical achievement unlocked by Lytro; the influence of Pinterest’s design on nearly every e-commerce site; our narcissistic addiction to Timehop and delight in depositing checks through our banks’ mobile apps; today’s fascination with exploding pictures, courtesy of Snapchat; and, on the horizon, one of the most anticipated interface advancements: Google Glass.
Because our digital photographs are inherently social objects, and because mobile platforms now give companies a massive global audience to which they can deliver tools and software directly, digital pictures have transformed into a strange, fierce battleground where platform giants test the limits of privacy, throw friendly punches at their rivals, and experiment with new business models. Most coverage and analysis of this trend focuses on the end consumers, but for the purposes of this post, let’s consider digital photographs a byproduct of the advances in mobile camera technology, as the camera itself is perhaps the single most important sensor on our phones and tablets.
As the camera sensor improves over time, the opportunities around photo sharing increase with it. My belief, therefore, is that we are still in the early innings of this digital photography craze. If you’re tired of the meme, brace yourself, because it will take years to unfold; if you’re excited about this future, it’s a great time to get your hands dirty. First, let’s briefly consider what hardware changes we can anticipate: device battery life and processing speeds will only improve with time, perhaps opening the door to richer image capture, video capture, and so forth, and RGB-D (“D” for depth) cameras will allow devices to capture not only more of the light field, but also the depth of objects from the sensor and in relation to each other, which has big implications for the advancement of 3-D modeling. (Google Glass will eventually, we hope, have huge implications for passive image capture, both for consumers and for commercial activities.)
As the hardware advances the camera sensor’s capabilities, software will not be far behind. So far, consumers have glommed onto applications that filter, organize, and/or erase photographs. Lytro’s technology enables the capture of the entire light field, allowing focus to be adjusted after capture (though I’m not sure if their technology will be incorporated into or licensed for other devices). Moving forward, I’d expect more camera applications to offer more context around each photograph, such as automatic image and object detection, auto-tagging and classification (especially around location, as Findery is working on), and auto-arrangement or organization. I’d also expect more technologies (like Stipple) to auto-fingerprint photographs to preserve their provenance as they’re shared across the web or before they’re screenshot and manipulated on mobile devices.
The tricky thing here is that only in hindsight do filters or boards or exploding pictures make sense. “Of course teenagers will want to make their pictures disappear, like TigerText messages!” We won’t really know what consumers want until we see the new applications and experiments live, in the wild, and monitor their usage. What is certain, however, is that all of these advances will create opportunities for developers to build new applications and advance the collection, documentation, recognition, manipulation (including distortion), and sharing of digital pictures in ways that we’ve yet to imagine.
While I’m unable to articulate just what this future will look like, I do feel we will see more and more activity in this space: more experiments, more applications, and more things that start out looking like toys but may ultimately become new channels and modes of communication, along with the societal implications we can expect when nearly everything “can” be captured. And we don’t even know how the world will react to the new image-based interfaces that will surely come to market, starting with Google Glass, perhaps followed by a touch-enabled television with its own camera, and much more. When it comes to images, it’s early innings, indeed.
Photo Credit: Tony Ratanen / Flickr Creative Commons