To commercialize, voice tech must first solve its ‘cocktail party problem’

On average, people speak roughly 15,000 words per day. We call our friends and family, log into Zoom for meetings with our colleagues, discuss our days with our loved ones, or, if you're like me, argue with the ref about a bad call they made in the playoffs.

Hospitality, travel, IoT and the auto industry are all on the cusp of leveling up voice assistant adoption and the monetization of voice. The global voice and speech recognition market is expected to grow at a CAGR of 17.2% from 2019 to reach $26.8 billion by 2025, according to Meticulous Research. Companies like Amazon and Apple will accelerate this growth as they leverage ambient computing capabilities, which will continue to push voice forward as a primary interface.

As voice technologies become ubiquitous, companies are turning their focus to the value of the data latent in these new channels. Microsoft’s recent acquisition of Nuance is not just about achieving better NLP or voice assistant technology; it’s also about the trove of healthcare data its conversational AI has collected.

Google has monetized every click of your mouse, and the same thing is now happening with voice. Advertisers have found that speak-through conversion rates are higher than click-through conversion rates. Brands need to begin developing voice strategies to reach customers — or risk being left behind.

Voice tech adoption was already on the rise, but with most of the world under lockdown protocols during the COVID-19 pandemic, adoption is set to skyrocket. Nearly 40% of internet users in the U.S. used smart speakers at least monthly in 2020, according to Insider Intelligence.

Yet, there are several fundamental technology barriers keeping us from reaching the full potential of the technology.

The steep climb to commercializing voice

Worldwide shipments of wearable devices rose 27.2% year over year to 153.5 million by the end of 2020. Yet despite all the progress made in voice technologies and their integration into a plethora of end-user devices, they are still largely limited to simple tasks. That is finally starting to change as consumers demand more from these interactions and voice becomes a more essential interface.

In 2018, in-car shoppers spent $230 billion to order food, coffee, groceries or items to pick up at a store. The auto industry is one of the earliest adopters of voice AI, but to capture the technology’s true potential, the experience needs to become seamless and truly hands-free. Ambient car noise still muddies the signal enough to keep users tethered to their phones.

Simply selling more voice-enabled devices won’t magically solve the limitations of voice technology. There are two main challenges confronting the evolution of voice technologies: the understanding of intent and emotion, and overcoming issues associated with signal-to-noise ratios (SNR) in high-noise or crowded environments.
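To make the SNR challenge concrete, here is a minimal sketch (not from the original article) of how signal-to-noise ratio is conventionally measured in decibels. The synthetic "speech" tone and noise levels are illustrative assumptions; the point is how quickly the ratio collapses as ambient noise approaches the level of the voice itself:

```python
import math
import random

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = sum(x * x for x in signal) / len(signal)
    p_noise = sum(x * x for x in noise) / len(noise)
    return 10 * math.log10(p_signal / p_noise)

rng = random.Random(0)
n = 16_000                                                # 1 second at 16 kHz
speech = [0.5 * math.sin(2 * math.pi * 220 * i / n) for i in range(n)]  # stand-in voice tone
quiet = [rng.gauss(0, 0.01) for _ in range(n)]            # low ambient noise
crowd = [rng.gauss(0, 0.5) for _ in range(n)]             # cocktail-party-level chatter

print(f"quiet room:   {snr_db(speech, quiet):.1f} dB")
print(f"crowded room: {snr_db(speech, crowd):.1f} dB")
```

In the quiet room the voice sits tens of decibels above the noise floor; in the crowded room the SNR goes negative, meaning the chatter carries more power than the voice the system is trying to hear.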

Do you understand the words coming out of my mouth?

Intent has been a core, and improving, focus of most NLP technologies. Swaths of data have been collected to help voice assistants better understand intent. While voice tech has advanced in certain areas, such as customer service channels, it still faces major challenges when confronted with understanding the myriad signals from the real world.

We have built the capability to understand signals of intent in closed channels that require specific understanding — valuable for doing simple tasks, knowing when to escalate a customer’s problem to a human agent, or seamlessly directing customers through a limited set of options. For the tech to be viable in real-world situations, however, it must understand a much wider variety of situations and inputs.

Voice technologies currently work in conjunction with other data points from wearables, and as we gain more signals that we can correlate, we can begin to provide more agile and robust context for greater understanding in voice technologies.

Using human tools to solve human problems

Our voice technologies have not been engineered to confront the messiness of the real world or the cacophony of our actual lives.

Background noise and chatter have been a difficult challenge for voice technologies to overcome. This “cocktail party problem” is one of the greatest barriers to voice technologies reaching a level of understanding comparable to humans. Exacerbating the challenge is the fact that we simply can’t adequately test for this effect in a traditional lab environment.
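The classic formalization of the cocktail party problem (not spelled out in the article, but standard in signal processing) is that each microphone records a linear mixture of the underlying sources, and separation means recovering the sources from the mixtures. The toy sketch below cheats by assuming the mixing weights are known so it can simply invert them; real blind source separation (e.g., independent component analysis) must estimate that unmixing from the recordings alone:

```python
# Toy cocktail party: two microphones each record a different linear
# mixture of two sources (a voice and background chatter).
import math

n = 1000
voice = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]            # "speech"
chatter = [math.sin(2 * math.pi * 23 * i / n + 0.6) for i in range(n)]   # "noise"

# Mixing matrix A: each mic hears a different weighted sum of the sources.
A = [[1.0, 0.8],   # mic 1: mostly voice, some chatter
     [0.3, 1.0]]   # mic 2: mostly chatter, some voice
mic1 = [A[0][0] * v + A[0][1] * c for v, c in zip(voice, chatter)]
mic2 = [A[1][0] * v + A[1][1] * c for v, c in zip(voice, chatter)]

# Unmix by inverting the 2x2 mixing matrix (known here; estimated blindly
# in real systems).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
W = [[ A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det,  A[0][0] / det]]
voice_hat = [W[0][0] * x1 + W[0][1] * x2 for x1, x2 in zip(mic1, mic2)]

err = max(abs(a - b) for a, b in zip(voice, voice_hat))
print(f"max reconstruction error: {err:.2e}")
```

The hard part in practice is exactly what this sketch skips: with a single microphone, an unknown number of speakers, and reverberation, there is no clean matrix to invert, which is why the problem remains a barrier.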

The growing adoption of voice in devices, and the quality and quantity of data that adoption generates, offers the prospect of finally overcoming the cocktail party problem. Solving it will be necessary for the technology to reach its full usefulness.

Solving these problems requires voice tech to meet the human standard for voice and match the complexities of the human auditory system. Yes, you need really good NLP and conversational AI, but this goes deeper — you have to be able to extract clean and complete signals.

When we develop voice strategies that account for and solve these challenges, the business proposition for voice becomes unavoidable. The underlying data takes on enormous value overnight. When you have a clean signal, you have access to contextual data that brands desperately need for quality customer engagements.

Such data will let us understand what types of purchasing decisions happen when a person is energetic or tired, know what music to play based on mood, and accurately identify speakers and correlate behaviors to individuals in a household.

Better contextualization and understanding needs to be a priority so these technologies can develop past their current limitations. To unlock that real-world potential, we need to focus on real-world situations.