More than a year ago, Matthew Panzarino wrote an article on TechCrunch describing a new type of mobile app experience, for which he coined the term Invisible App. He predicted that we would imminently see the rise of a huge number of apps that would live in the background, anticipating our needs based on sensor and contextual data, and doing things for us before we even had to ask. What an exciting vision.
The trends are clearly on the side of the invisible app. Our phones are getting more and more sensors, and these tiny components are also getting faster and more accurate, and draining significantly less battery. In fact, the newest smartphones have more than 10 sensors that can combine to detect things as subtle as whether you are driving up a hill or biking down a hill.
Now the computer in our pocket can not only access all the information in the world, but also can automatically understand where we are, what we are doing and what is happening around us.
Then the Apple Watch was announced, and Matthew’s vision grew even more exciting and more likely. This new product would now be able to take all the sensor-intelligence our invisible apps would be producing and have a place to conveniently display it — our wrist. Running into a meeting and need to know the background of the person with whom you are meeting? Your calendar would anticipate this and conveniently present that information on your wrist.
But more than a year later, these apps haven’t emerged. Where is the app that knows I am still in the office at 8 pm and orders me food? Where is the app that knows I am out and I’ve had one too many beers and orders me an Uber? Where is the app that scolds me for not going to the gym this week?
Why Aren’t Invisible Apps Here?
If our phones are getting smarter with more sensors, and using that intelligence is a widely understood opportunity, why aren't the apps on our phones using those sensors to get smarter and anticipate our needs?
The first issue that developers of invisible apps face is privacy. The data that invisible apps need is highly sensitive and needs to be guarded and treated with immense care. Best practices, regulations and legal precedent are only just emerging for location data, and have quite a way to go for all other sensor data like activity, altitude and ambient lighting. Many consumers are hesitant to give apps access to this data because it’s not fully clear how this data will be used and safeguarded, which makes developers hesitant to build experiences that require it.
For invisible apps to become widespread, industry associations will need to be formed, best practices around privacy will need to be created and all of this will need to be communicated to consumers.
Sensors, which are the cornerstone of the invisible app experience, have made huge strides with regard to battery efficiency, but they still require quite a bit of power. What person hasn’t watched their battery drain with alarming speed while they used navigation software? The good news is that it is absolutely possible to build services that use GPS and other sensors in the background with minimal impact on battery life.
The bad news is that optimizing how, when and which sensors you poll to reduce battery impact is nuanced and complex, and requires drawn-out battery tests that can take weeks or months to perform. This is outside the scope of most development projects.
For developers and consumers to get comfortable using sensors more regularly and in the background, battery optimization needs to be as simple as choosing from a few preset trade-offs between battery drain and accuracy.
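As a toy illustration of what "selecting from a few preset trade-offs" could look like, here is a sketch in Python. The profile names and thresholds are invented for this example; they loosely echo the modes the platforms expose, but this is not any platform's real API:

```python
from dataclasses import dataclass

# Hypothetical presets trading location accuracy against battery drain.
# Names and numbers are illustrative, not taken from any real SDK.
@dataclass(frozen=True)
class PowerProfile:
    name: str
    sample_interval_s: int  # how often to wake the sensor (0 = never actively)
    use_gps: bool           # GPS is accurate but power-hungry

PROFILES = {
    "no_power":      PowerProfile("no_power", 0, False),     # piggyback on other apps' fixes
    "low_power":     PowerProfile("low_power", 300, False),  # cell/Wi-Fi fix every 5 min
    "balanced":      PowerProfile("balanced", 60, False),
    "high_accuracy": PowerProfile("high_accuracy", 5, True),
}

def pick_profile(battery_pct: float, needs_precision: bool) -> PowerProfile:
    """Choose a sampling profile from the app's needs and remaining battery."""
    if needs_precision and battery_pct > 20:
        return PROFILES["high_accuracy"]
    if battery_pct < 10:
        return PROFILES["no_power"]
    return PROFILES["balanced"] if needs_precision else PROFILES["low_power"]
```

The point is that the developer declares intent ("I need precision" or "I can tolerate staleness") and the hard per-sensor optimization work lives behind the preset.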
iOS and Android both provide API access to sensor data on phones. However, there are literally dozens of APIs to learn, each functions differently and documentation is incredibly sparse. This makes it very difficult for developers to build even simple logic into their apps. For instance, on Android, just for location, there are Geofences, the Passive, Network and GPS Location Providers, and the Fused Location API, which itself has No Power, Low Power, Balanced Power and High Accuracy modes.
Each of these behaves differently under different conditions, but reading documentation alone won’t make it possible to implement a decent experience. You’ll have to spend a lot of time working with each to figure out how to actually achieve what you are trying to build.
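To make one of those concepts concrete: conceptually, a geofence is just a check that a coordinate falls within some radius of a center point. Here is a simplified Python stand-in for that idea, using the standard haversine great-circle distance (this is not the platform implementation, which also handles batching, wake-ups and power management):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lng points, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_geofence(lat, lon, center_lat, center_lon, radius_m):
    """True if (lat, lon) lies inside the circular fence."""
    return haversine_m(lat, lon, center_lat, center_lon) <= radius_m
```

The math is the easy part; the hard part the platforms wrestle with is deciding how often to check it without draining the battery.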
The current hodgepodge of sensor APIs requires developers to spend too much time understanding the underlying sensor and the software derivatives. iOS and Android need to simplify their APIs and provide better documentation, or a layer needs to emerge that simplifies developing on sensors.
Converting Raw Data Into Intelligence
Most of the sensor APIs provide raw data, such as a latitude and longitude, or a stream of numbers representing movement along the Y axis. This puts a tremendous onus on the developer to turn sensor data into actionable intelligence. To do this well, app developers would need to collect hundreds of millions, if not billions, of data points. One of the rare places where Android and iOS provide meaningful intelligence over raw data is with activity recognition, which lets apps know whether their user is walking, driving or running.
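To show why this kind of inference is nontrivial, here is a deliberately naive Python sketch of activity classification from GPS speed and accelerometer "bounciness." The thresholds are invented for illustration; the real platform classifiers are trained on large labeled datasets, which is exactly the point:

```python
import statistics

def classify_activity(speeds_mps, accel_magnitudes):
    """Naive heuristic: guess an activity from average GPS speed (m/s)
    and the variance of accelerometer magnitude readings.
    Thresholds are illustrative, not tuned on real data."""
    avg_speed = statistics.mean(speeds_mps)
    bounce = statistics.pvariance(accel_magnitudes)
    if avg_speed > 8 and bounce < 1.0:
        return "driving"   # fast but smooth ride
    if avg_speed > 2.5:
        return "running"   # fast and bouncy
    if avg_speed > 0.5:
        return "walking"
    return "still"
```

A toy like this falls apart on edge cases (a bumpy bus ride, a phone in a backpack), which is why building the real thing takes tens of thousands of labeled samples.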
At this year's Google I/O it was revealed that this basic functionality was achieved with between 20,000 and 60,000 data points. Getting to anything more interesting than whether someone is walking would require an order of magnitude more data points, an enormous effort outside the reach of most startups.
The underlying collection and analysis of sensor data is not something every developer should need to undertake. Apps need to be able to code against use cases and not against raw sensor data.
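What "coding against use cases" could look like is an event layer where apps subscribe to inferred moments rather than consuming sensor streams. The interface below is hypothetical, not an existing library; the events would be fired by fused sensor inference, which here is simulated by calling emit directly:

```python
from typing import Callable

class ContextEngine:
    """Hypothetical layer: apps subscribe to use cases ('arrived_home',
    'started_driving') instead of raw sensor streams. In a real system,
    emit() would be driven by sensor fusion; here we call it by hand."""

    def __init__(self):
        self._handlers: dict[str, list[Callable[[dict], None]]] = {}

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        """Register a callback for a named use case."""
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, context: dict) -> None:
        """Fire an inferred event, fanning out to all subscribers."""
        for handler in self._handlers.get(event, []):
            handler(context)

engine = ContextEngine()
engine.on("arrived_home", lambda ctx: print("Welcome home at", ctx["time"]))
engine.emit("arrived_home", {"time": "18:42"})  # prints: Welcome home at 18:42
```

The app's logic ("when the user arrives home, do X") stays readable, and everything sensor-specific is someone else's problem.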
The utopian future where our apps know what we need before we even have to ask is coming. The opportunity is too significant and the mega-trends too strong to stop this from happening. However, there are privacy, technical and infrastructure hurdles that must be overcome, as well as foundational pieces that need to be built.
Once it is easy to build accurate, battery-efficient and privacy-friendly apps with sensor intelligence, I believe we will see a huge rush of entirely new apps and experiences that will make us even more productive and more efficient, and make our phones even more central to our lives.