Since Siri became a part of iOS in 2011, the idea of a digital assistant that can help us get things done just by asking has been a central focus for the big smartphone platform owners.
Less than a year after Apple rolled out its AI companion, Google followed suit with Google Now, a service that preemptively sends notifications based on contextual clues like your location, your email inbox, and traffic data.
At this year’s Build conference, Microsoft announced its own smart assistant: Cortana, named after the AI from the Halo franchise of games on the Xbox. Microsoft says Cortana is designed to act like a secretary, reminding you about the things that you want to bring up when you call your wife on your way home.
Despite the niftiness of having your phone tell you to leave work 15 minutes early to make it to your flight, or of Siri and Cortana cracking the occasional joke, the limitations of these assistants are clear. Siri can set a reminder, but she can’t help you schedule a meeting; Google Now can suggest some restaurants nearby, but it can’t book you a table at your favorite place.
How do these assistants accomplish such advanced tasks? Does Jarvis know what kind of food its users like? Can it tap into your schedule while looking for flights to see what might work for you? Can it actually interact with people via email without sounding like a form letter?
Yes, but not because the startup has managed to create an advanced artificial intelligence that just happens to be flying under everyone’s radar. Instead, it has created a TaskRabbit for digital tasks: on the other end of every request is a college-educated worker sitting at a computer, searching for the best food around your office or using price-comparison tools to find flights.
Even with the convenience that comes from modern apps, looking through Yelp for a decent place to take a business associate to dinner can still take quite a while, especially if you’re the indecisive type to begin with. I’ve spoken to a few users of these premium digital assistants, and one use case that kept coming up was getting three or four options to choose from to simplify decision-making. People just want someone to do the boring, trivial tasks for them.
In a few years, many of these tasks will be accomplished by software. Apple, Google, and Microsoft aren’t standing still with their efforts, and new companies (including one founded by the creators of Siri) are looking to create services that learn and tie in to all of your favorite apps to create digital assistants that are aware of your circumstances and the nuance of tasks that you want done.
But until then, it’s interesting to see humans essentially slotted into these services like cogs in a machine. The interfaces are there for users in the form of apps (or text and email for Jarvis, which means you can use it via Siri’s messaging capabilities), and the data is there, but the ability to parse complex requests (or to take simple requests and understand what they mean in context) is years away.
And when that technology does arrive, I can’t help but wonder whether these services will go away or simply move further up the “experience stack.” Imagine it: people with too much on their plate could pay a company like Fancy Hands to manage aspects of their life that are too tricky, too awkward, or just take a few seconds too long to handle with an AI assistant, while workers at that company use AI in concert with traditional applications to become even more productive.