Live 95.5, a radio station in Portland, Oregon, announced Tuesday that its midday host Ashley Elzinga will broadcast a cloned version of her voice — aka “AI Ashley” — to listeners every day from 10 a.m. to 3 p.m. They’ll be using Futuri Media’s “RadioGPT,” an AI-powered tool that uses GPT-4 to generate a script based on trending news and reads it with a synthetic voice.
The radio station noted that AI Ashley isn’t fully replacing “traditional Ashley.” Being replaced by AI has been a major concern for local DJs, since radio stations have scaled back live programming in recent years to cut costs or to embrace technologies like AI. In 2020, iHeartMedia restructured its organization, laid off hundreds of people nationwide and invested in artificial intelligence.
Live 95.5’s parent company, Alpha Media, assured us that Elzinga would not be losing her job and would still be paid the same salary.
“It’s a hybrid situation where we’ll have traditional Ashley on during some segments, and we’ll have AI Ashley on during other segments,” Phil Becker, Alpha Media EVP of Content, explained to TechCrunch. “In an instance where AI Ashley would be broadcasting, the traditional Ashley might be doing something in the community, managing social posts or working on digital assets or the other elements that come with the job.”
Becker also noted that Alpha Media isn’t using RadioGPT to cut costs. Rather, it’s meant to be an efficiency tool in a radio host’s toolkit.
Listeners may be wary of AI DJs, since they have tuned into local radio personalities for years and don’t necessarily want to listen to a robotic-sounding voice on their way to work in the morning. That’s why Live 95.5 trained the AI to sound like an existing radio host that the audience has already connected with.
As seen in a tweet where Elzinga shows off her AI DJ counterpart, the voice sounds similar to the real thing but is still recognizably synthetic. The cloned voice also introduces itself as AI Ashley, so listeners know that an AI is speaking and not a real human.
“One of the absolute most important parts of this is that we’re transparent with the listener. It’s not our intent to ever deceive anybody,” added Becker.
Twitter users expressed their disappointment with the decision. One user even said the AI voice was disrespectful to the entire radio profession.
While Live 95.5 is one of the first radio stations to use RadioGPT, it certainly won’t be the last. Alpha Media’s KUFO Freedom 970, another Portland-based radio station, will also be using the AI tech.
Alpha Media oversees more than 200 radio stations in the U.S. The company claims to be the first radio broadcasting company in the world to have an AI DJ.
Another Futuri partner is Rogers Sports & Media, which owns 55 stations and over 29 podcasts in Canada.
When Futuri launched the beta version of RadioGPT in February, people questioned the ramifications of the AI tech. One major concern was whether AI DJs could spread misinformation. Fact-checking isn’t ChatGPT’s strong suit, and it has been known to “hallucinate,” meaning it confidently provides users with made-up information.
Bing and Google both have AI chatbots that users have criticized for making factual errors.
ChatGPT’s output still requires human moderation, so it’s important that radio stations using RadioGPT have real people reviewing the generated content.
Futuri’s tech scans over 250,000 news sources as well as stories on Facebook, Twitter and Instagram to identify which topics are trending locally. RadioGPT then feeds these trending stories to a GPT model to create a script. Becker says a team of human moderators reviews and edits that content before the AI voices relay it to listeners in that local area.
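Futuri hasn’t published RadioGPT’s internals, but the workflow Becker describes — trending stories in, a model-drafted script, and a human sign-off before anything airs — can be sketched roughly as follows. Every name and function here is hypothetical, and the model call is a stub standing in for whatever GPT endpoint the real system uses.

```python
from dataclasses import dataclass

@dataclass
class Script:
    topic: str
    text: str
    approved: bool = False  # nothing airs until a human flips this

def draft_script(topic: str) -> Script:
    # Stand-in for the GPT call that writes the on-air copy.
    prompt = f"Write a 30-second radio segment about: {topic}"
    text = f"[model draft for prompt: {prompt!r}]"  # placeholder output
    return Script(topic=topic, text=text)

def review(script: Script, editor_ok: bool) -> Script:
    # Human-in-the-loop gate described in the article: moderators
    # review and edit generated content before broadcast.
    script.approved = editor_ok
    return script

def broadcast_queue(topics, editor_ok=True):
    # Only scripts a human moderator signed off on reach the TTS voice.
    reviewed = [review(draft_script(t), editor_ok) for t in topics]
    return [s for s in reviewed if s.approved]

queue = broadcast_queue(["Local farmers market opens", "Traffic backup on I-5"])
print(len(queue))  # prints 2: both drafts were approved
```

The key design point is that the approval gate sits between generation and broadcast: if `editor_ok` is false for a draft, it never enters the queue, which is the safeguard against hallucinated content reaching listeners.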
“You’ll still have to have the original content creators touching it, checking it, proving it, all of those sorts of things,” Becker said.
Plus, Futuri CEO and founder Daniel Anstandig claims that safeguards are in place to prevent hallucinatory content from going out over the air.
“Our AI is programmed to cross-check and reduce the occurrence of speculative or inaccurate content, as is sometimes referred to as ‘hallucination,’” Anstandig said. RadioGPT also “confirms that any generated content does not contain any offensive material,” he added.