Alexa developers can now use notifications, soon personalize apps based on users’ voices

Amazon says it will allow Alexa skill developers to alert customers using notifications starting today, and soon, it will allow them to recognize users’ individual voices as part of their skill-building process. These changes, along with other developer enhancements, are being announced this morning at Amazon’s re:Invent conference in Las Vegas, where the company delved into the science behind its Alexa voice platform in a keynote address.

Alexa today leads the voice market for smart speakers by a wide margin. Strategy Analytics estimates Alexa will be on 68 percent of smart speakers by year-end, while other reports put its market share even higher, at 76 percent.

This traction has allowed Amazon to generate developer interest, even though a monetization model for voice apps arrived only recently, with the introduction of paid subscriptions for Alexa skills. Despite that lag in letting developers profit from their work, Amazon says there are now over 25,000 third-party skills for Alexa, and the number of active customers has grown more than fivefold.

Today, the company is giving skill developers a more direct way to engage their customers, instead of relying only on voice commands. It’s expanding support for notifications, the company announced – meaning more developers will now be able to alert their app’s end users about updates and new content using lights and audio cues.

The feature, first publicly introduced in September, takes advantage of the LED light on Alexa-powered devices, which can turn green to indicate there’s something new. Early testers included The Washington Post, AccuWeather, and family locator Life360, which used notifications to send out alerts about breaking news, severe weather, and family location updates, respectively. The lights can be combined with a brief audio cue to signal there’s new content or information available.

Alexa device owners can then ask something like “Alexa, what did I miss?” or “Alexa, what are my notifications?” to catch up.
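The interaction pattern described above can be sketched as a simple per-user queue that skills push alerts into and that a query like “what did I miss?” drains. This is an illustrative model only; the class and method names below are hypothetical and are not part of the actual Alexa Skills Kit API.

```python
from collections import deque


class NotificationQueue:
    """Hypothetical sketch of the notification pattern: skills queue
    alerts, and the user's query reads and clears them."""

    def __init__(self):
        self._pending = deque()

    def push(self, skill, message):
        # A skill (e.g. a news or weather app) queues an alert; on a
        # real device this would also light the green LED and play a
        # brief audio cue.
        self._pending.append(f"{skill}: {message}")

    @property
    def has_new(self):
        # Whether the device should signal pending notifications.
        return bool(self._pending)

    def read_all(self):
        # Invoked when the user asks "Alexa, what are my notifications?"
        items = list(self._pending)
        self._pending.clear()
        return items


queue = NotificationQueue()
queue.push("AccuWeather", "Severe thunderstorm warning until 6pm")
queue.push("The Washington Post", "Breaking news alert")
print(queue.has_new)     # → True
print(queue.read_all())
print(queue.has_new)     # → False
```

Reading the queue clears it, matching the behavior described: once the user hears their notifications, the green light turns off until something new arrives.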

Amazon says this feature is already available for shopping and other select skills, like news, weather, food delivery, and more. Today, it’s expanding that developer preview by inviting others to apply to participate before making the feature generally available next quarter.

The company also said developers will soon be able to access new technology that can recognize the individual voices of the people using their voice app. This is an expansion of Amazon’s “Your Voice” technology introduced in October, which allows Alexa devices to offer individualized experiences based on who’s asking – like personalized shopping lists or music selections, for example.

In early 2018, Amazon says this technology will make its way to third-party developers as well, allowing them to build personalized experiences into their skills.

The keynote address additionally highlighted other technologies for voice app developers, including improvements to the Skill Builder tool, the Alexa Skills Kit Command Line Interface, Speech Synthesis Markup Language (SSML) and speechcons, natural language understanding, and more.

Amazon also announced the winner of its inaugural Alexa Prize, a university competition it held to advance the field of conversational A.I.

The winning team was Sounding Board, from the University of Washington, which built a socialbot that engages users in thoughtful discussions. Sounding Board took home $500,000 for the first-place prize, followed by runner-up Alquist from Czech Technical University in Prague, and third-place winner What’s Up Bot from Heriot-Watt University in Edinburgh, Scotland.

Amazon says the 2018 Alexa Prize competition will open to applications on December 4, 2017 and close on January 8, 2018.