Largest text-to-speech AI model yet shows 'emergent abilities'


Researchers at Amazon have trained the largest text-to-speech model yet, which they claim exhibits "emergent" qualities that improve its ability to speak even complex sentences naturally. The breakthrough could be what the technology needs to escape the uncanny valley.

These models were always going to grow and improve, but the researchers specifically hoped to see the kind of leap in ability that we observed once language models got past a certain size. For reasons unknown to us, once LLMs grow past a certain point, they start being far more robust and versatile, able to perform tasks they weren't trained to do.

That is not to say they are gaining sentience or anything, just that past a certain point their performance on certain conversational AI tasks hockey sticks. The team at Amazon AGI — no secret what they're aiming at — thought the same might happen as text-to-speech models grew, and their research suggests this is in fact the case.

The new model is called Big Adaptive Streamable TTS with Emergent abilities, which they have contorted into the abbreviation BASE TTS. The largest version of the model was trained on 100,000 hours of public domain speech, 90% of which is in English, with the remainder in German, Dutch and Spanish.

At 980 million parameters, BASE-large appears to be the biggest model in this category. They also trained 400M- and 150M-parameter models based on 10,000 and 1,000 hours of audio respectively, for comparison — the idea being, if one of these models shows emergent behaviors but another doesn’t, you have a range for where those behaviors begin to emerge.

As it turns out, the medium-sized model showed the jump in capability the team was looking for — not necessarily in ordinary speech quality (it was rated better, but only by a couple of points) but in the set of emergent abilities they observed and measured. Here are examples of tricky text mentioned in the paper:

“These sentences are designed to contain challenging tasks – parsing garden-path sentences, placing phrasal stress on long-winded compound nouns, producing emotional or whispered speech, or producing the correct phonemes for foreign words like “qi” or punctuations like “@” – none of which BASE TTS is explicitly trained to perform,” the authors write.

Such features normally trip up text-to-speech engines, which will mispronounce, skip words, use odd intonation or make some other blunder. BASE TTS still had trouble, but it did far better than its contemporaries — models like Tortoise and VALL-E.

There are a bunch of examples of these difficult texts being spoken quite naturally by the new model at the site they made for it. Of course these were chosen by the researchers, so they’re necessarily cherry-picked, but it’s impressive regardless. Here are a couple, if you don’t feel like clicking through:

https://techcrunch.com/wp-content/uploads/2024/02/shh-its-starting.wav?_=1

https://techcrunch.com/wp-content/uploads/2024/02/how-french.wav?_=2

https://techcrunch.com/wp-content/uploads/2024/02/guiding-moonlight.wav?_=3

Because the three BASE TTS models share an architecture, it seems clear that the size of the model and the extent of its training data are what drive the model's ability to handle some of the above complexities. Bear in mind this is still an experimental model and process — not a commercial model or anything. Later research will have to identify the inflection point for emergent ability and how to train and deploy the resulting model efficiently.

A representative for Amazon AI, Leo Zao (not an author of the paper), wrote that the team makes no claim that emergent properties are exclusive to its model.

“We think it’s premature to conclude that such emergence won’t appear in other models. Our proposed emergent abilities test set is one way to quantify this emergence, and it is possible that applying this test set to other models could produce similar observations. This is partly why we decided to release this test set publicly,” he wrote in an email. “It is still early days for a ‘Scaling Law’ for TTS, and we look forward to more research on this topic.”

Notably, this model is "streamable," as the name says — meaning it doesn't need to generate whole sentences at once, but proceeds moment by moment at a relatively low bitrate. The team has also attempted to package speech metadata like emotionality and prosody into a separate, low-bandwidth stream that could accompany vanilla audio.
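The paper's actual interface isn't public, but the streaming idea can be illustrated with a hypothetical sketch: audio is emitted in short chunks that a player can consume as they arrive, with a small metadata record riding alongside each one. All names, chunk sizes and the placeholder "synthesis" below are invented for illustration — this is not Amazon's API.

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Chunk:
    audio: bytes    # stand-in for a short segment of compressed audio
    metadata: dict  # e.g. prosody/emotion hints in a low-bandwidth side channel

def stream_tts(text: str, chunk_words: int = 3) -> Iterator[Chunk]:
    """Toy 'synthesizer': yields one chunk per few words of input,
    so playback can begin before the whole sentence is processed."""
    words = text.split()
    for i in range(0, len(words), chunk_words):
        segment = " ".join(words[i:i + chunk_words])
        # A real model would run inference here; we emit placeholder bytes.
        yield Chunk(audio=segment.encode("utf-8"),
                    metadata={"segment": segment, "emotion": "neutral"})

# A player can start as soon as the first chunk arrives:
for chunk in stream_tts("Shh, I think it's starting now"):
    print(len(chunk.audio), chunk.metadata["segment"])
```

The point of the generator shape is latency: each chunk is available for playback immediately, rather than after full-sentence synthesis.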

It seems that text-to-speech models may have a breakout moment in 2024 — just in time for the election! But there’s no denying the usefulness of this technology, for accessibility in particular. The team does note that it declined to publish the model’s source and other data due to the risk of bad actors taking advantage of it. The cat will get out of that bag eventually, though.
