Three years ago, the name of the game at Apple’s WWDC was the introduction of Siri, the company’s voice recognition-based personal assistant integrated into the iPhone 4S. Today, while Apple is not giving Siri as big a stage, it announced upgrades that show how the company is further building out the service, laying the groundwork for when it will become a more central part of how consumers interact with Apple devices. Features announced today include streaming voice recognition, support for 22 languages and Shazam integration, along with integration into Touch ID.
The first of these will let users see on the screen, in real time, what Siri thinks they are saying — helping them correct themselves before Siri leads them down the garden path. This fits with another move to integrate voice features more closely into messaging: a new “tap to talk” feature.
The Shazam integration is something people had been floating as a possibility for a while. It will let users ask Siri, “What song is playing?” and Siri will call up the answer using Shazam’s audio database. It shows how music will continue to be a central part of how Apple develops iOS, and how the company will integrate it further into the core functions of devices running the operating system. (And that gives you one more indication of why it was so important for Apple to own its own music streaming company and have its own home-team talent working on how to use it.)
We still have to get a full list of the 22 languages, but the expansion underscores how Apple is increasingly trying to extend the kind of traction it has achieved in markets like Western Europe, the U.S. and Japan into markets that are still seeing rapid smartphone adoption. Given how much market share Android has gained in the wider world (it now accounts for over 80 percent of all smartphone shipments), it’s important for Apple to keep making its iterations and bells and whistles something that speaks to that wider market.
What we have yet to hear about (and likely will not today, although the WWDC keynote is still in progress) are developments on some of the other reports that have been floating around. Namely, indications that Siri would be getting more third-party integrations, partly in preparation for devices like an iWatch, where speech would become an even more important way for a person to interact; or a confirmation of progress on services Apple has itself been promising: specifically, integrating Siri’s “Eyes Free” features into selected in-car systems.
Prior to today, Apple had been making some other key moves on the voice recognition front. Earlier this year, it quietly acquired Novauris, a speech recognition company based in the UK, whose natural language and speech recognition specialists now work on Siri.
And it has been making official, gradual enhancements to Siri, such as integrating it into its podcast app.
More to come.