Sayso is launching an API to dial down people’s accents a wee bit

Struggling to understand your heavily accented co-worker? Can’t follow what the customer support person at the other end of the phone is saying? Technology rushes to the rescue. It turns out that listening to an unfamiliar accent can dramatically increase the cognitive load (and, by extension, the amount of energy you expend to understand someone). Sayso is attempting to tackle this problem by giving developers an API that can change accented English from one accent to another in near real time.
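
Sayso hasn’t published developer docs yet, so here is a purely hypothetical sketch of what calling such an accent-conversion API could look like; the endpoint, field names, key and accent codes below are all invented for illustration:

```python
# Hypothetical illustration only: Sayso's real API isn't documented here,
# so the endpoint, parameters and accent codes are invented.
import requests

with open("clip.wav", "rb") as audio:
    resp = requests.post(
        "https://api.sayso.example/v1/convert",       # made-up endpoint
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        files={"audio": audio},                       # accented source audio
        data={"source_accent": "hi-en", "target_accent": "us-en"},
    )

with open("converted.wav", "wb") as out:
    out.write(resp.content)                           # converted audio back
```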

As someone who speaks with an accent, I have mixed feelings about this technology. I like a bit of diversity in how people around me sound, and it’s easy to see how this technology could be abused; it wouldn’t be awesome, for example, if everyone who speaks with a certain accent were automatically “corrected” into the same accent. On the other hand, people do choose to use Zoom backgrounds and TikTok filters, and if handled well, it’s easy to see how someone could opt in to softening a heavy accent for “cosmetic,” accessibility, or intelligibility reasons. And there’s no shortage of people who can’t use voice recognition systems because of their accents. Funny memes and people shouting at their cars aside, it’s a real problem.

A lot of speech-to-text technologies use natural language processing (NLP) to take an educated guess at what someone is saying. Sayso’s technology doesn’t care about the actual words; it takes the individual sounds and changes them to make them more intelligible.

“We don’t do anything with words and sentences. Instead, we do direct waveform operation: we work with disentangled speech elements. What I mean by that is things like voice, intonation, speech content, accent; we can work with fillers, like uhms and aahs. And we can alter one component or multiple components at a time, and we can alter it in real time if we want,” explains Ganna Tymco, founder and CEO of Sayso. “When we started, the goal was to help people understand each other with ease. But then this vision extended to communicating clearly with technology. That’s the bigger, broader vision, with speech recognition and smart technologies that are speaker-specific.”

The company explains that it approaches speech in an organic way: the way the mouth, tongue and lips shape sounds, and how the vocal cords add some spice to the mix.

“Articulatory gestures are just groups of sounds. The interesting part is that this is language- and accent-independent. Our mouth can produce only a certain number of sounds, no matter which language is used. Our voice gets filtered through those articulatory gestures, and the output is much more complex. We take this sound wave, and we chop it into very small chunks, milliseconds in length,” explains Tymco. “This is suitable for real-time processing. We map speech of one accent to a different accent. So we have parallel data, and we teach our system how the sound wave for a speaker with one accent looks versus a speaker with the target accent. And then we alter the shape of the sound wave to match the desired accent more closely. The really neat thing about it is that it is universal. So it’s independent of accent.”
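
To make that concrete, here’s a minimal sketch, assuming a stand-in `map_frame` model (Sayso hasn’t shared its actual code), of how a waveform can be chopped into millisecond-scale frames, transformed frame by frame, and stitched back together, which is what makes real-time operation possible:

```python
# Minimal sketch of frame-by-frame waveform conversion (not Sayso's code).
# map_frame stands in for the trained accent-mapping model.
import numpy as np

SAMPLE_RATE = 16_000   # samples per second
FRAME_MS = 10          # millisecond-scale chunks, per Tymco's description

def map_frame(frame: np.ndarray) -> np.ndarray:
    # Placeholder: a trained model would reshape the waveform toward the
    # target accent here; identity keeps the sketch runnable.
    return frame

def convert(waveform: np.ndarray) -> np.ndarray:
    n = SAMPLE_RATE * FRAME_MS // 1000   # samples per frame
    frames = [waveform[i:i + n] for i in range(0, len(waveform), n)]
    return np.concatenate([map_frame(f) for f in frames])
```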

The company started by mapping particular accent pairs: Sayso first trained its systems on Hindi English and U.S. English, then expanded to Chinese, Spanish and Japanese accents as well. The system doesn’t take cadence, word choice, tone or emphasis into consideration. In fact, it prides itself on altering as little as possible about the sound, just mapping certain sounds to make the accents more intelligible. It may seem politically incorrect (not to mention unspeakably boring) to make everyone’s voice sound like Brad Pitt or Angelina Jolie, but the founder assured me that it’s more nuanced than that. With a future version of the company’s tech, if it’s my preference that everyone I speak to sounds like they have a dodgy Dutch accent, like my own, that would be possible. It would also be possible to map all accents to the one each listener is most familiar with, which means everyone on the call could hear a different accent: the one most similar to their own. (See the sketch below.)
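
That last scenario amounts to a fan-out: the same speech, converted once per listener. A hypothetical sketch (the listener preferences and the `convert_to` function are invented for illustration):

```python
# Hypothetical per-listener fan-out: each call participant hears the
# speaker converted toward the accent they know best. Names are invented.
preferences = {"alice": "us-en", "bob": "nl-en", "chen": "zh-en"}

def fan_out(waveform, convert_to):
    # convert_to(waveform, target_accent) returns converted audio
    return {listener: convert_to(waveform, accent)
            for listener, accent in preferences.items()}
```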

“Diversity and inclusion and accessibility are at the heart of what I do here. I started this because I have an accent and because people don’t understand it. I was working for a really large company here in Silicon Valley,” says Tymco, who declined to name the company in question. “I made a video for them. I used my voice to do a voiceover. They liked the video and didn’t want to change a single thing, but said that my voice wasn’t suitable. I was like, hey, what is wrong with my voice? I was wondering if there was software I could use to change my accent. There wasn’t, and they had to hire an actor and redo the whole thing. But it made me think about this very deeply.”

The company argues that people who are used to each other’s accents understand each other more easily. If you’re in New Zealand, understanding other Kiwis is easier than deciphering a Scottish accent, for example.

“We really want people to have an easier time understanding each other, and what is easiest to understand is what we’re most familiar with. We are starting with something that is relatively universal as an MVP,” explains Tymco. “But we can change anything to anything. And the goal is for you to choose what sounds easier for you when you listen to somebody. I think accents are beautiful, and I don’t want to erase them.”

Even though accent-changing may turn out to be a moral and/or ethical hellscape, there are also more pragmatic, technical arguments for Sayso’s technology. For example, when I interview entrepreneurs, I record the conversation and use a transcription service to ensure I have a written record of the interview. There’s a very strong correlation between how close a founder’s accent is to Standard Hollywood English and how good the transcription is. For someone with a strong Dutch or Indian accent, the transcriptions are far worse; processing the audio through a Sayso-like filter before running transcription on the file may result in far better transcripts.
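
The pipeline itself would be simple. Here’s a hedged sketch, assuming a hypothetical `sayso_convert` pre-processing step and using OpenAI’s open-source Whisper model as a stand-in transcriber:

```python
# Sketch of accent-convert-then-transcribe (the pipeline described above).
# sayso_convert is hypothetical; whisper is a real open-source STT library.
import whisper

def transcribe_with_conversion(audio_path, sayso_convert):
    converted = sayso_convert(audio_path, target_accent="us-en")
    model = whisper.load_model("base")
    return model.transcribe(converted)["text"]
```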

“[Transcription] is part of our business strategy,” explains Tymco. “Automatic subtitles, for example, can be way off. I’m often astonished by how bad they are, and nobody checks them manually. Our tech is definitely applicable to transcription.”

The company provided a demonstration to show a snapshot of what the converted speech sounds like.