Don’t be afraid of the ‘AI-assisted’ Beatles song, ‘Now And Then’

[Image: a computer mixing interface fading into an image of a cassette tape. Image Credits: The Beatles / YouTube / TechCrunch]

There’s been a bit of FUD around the decision to use a form of AI to resurrect John Lennon’s voice in what Paul McCartney called “the last Beatles record.” What they’ve done is far from the sketchy AI imitations of artists we see cluttering SoundCloud today, and has much more in common with a more prosaic application of machine learning: noise reduction.

To hear people talk about it, you’d think this was an abject money-grab using the latest voice synthesis tech to ape one of the most famous songwriters of all time. But the real story is simpler and more poignant than that, and the technology is far less fantastic.

As the members of the band recall in a sweet short film about the making of the song, “Now And Then” was originally a piano demo Lennon made shortly before he was killed in 1980. His widow, Yoko Ono, later provided the band with the tape it was recorded on, but the quality of the recording was not great. Bad, even.

“When we listened to ‘Now And Then,’ it was very difficult because John was sort of hidden in a way,” says Ringo Starr in the making-of short.

“Every time I wanted a little more of John’s voice,” recalled McCartney, “this piano came through and clouded the picture. And in those days, of course, we didn’t have the technology to do the separation.”

They “ran out of steam” in 1995 when they tried to rescue the song, but in 2022 they were working with Peter Jackson on the documentary “Get Back.” The filmmaker and his team were applying modern audio processing technology to archival footage of the band to isolate individual instruments and voices.

“We were paying a lot of attention to the technical restoration. That ultimately led us to develop a technology which allows us to take any soundtrack and split all the different components into separate tracks based on machine learning,” said Jackson in the short.

MAL, as they called it, builds on audio isolation technology that has come a long way in the last few years. A machine learning model can be trained on, say, many guitar tracks until it learns what the waveform or spectral signature of a guitar looks like, and can then, with varying degrees of success, pluck it right out of a mixed track.

The same approach is now common in video calls, using models trained on human voices: by suppressing everything that isn’t the speaker’s voice, background noise like a barking dog or a loud cafe can be silenced in real time. Cruder versions of this were sometimes used to make karaoke versions of songs by identifying and removing the vocal track.
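MAL itself isn’t public, but the core trick behind tools like it, estimating a mask over a spectrogram and applying it so that only the target source survives, is easy to sketch. The toy Python example below uses scipy and a hand-made frequency cutoff as a stand-in for the mask a trained model would predict; it illustrates the masking principle only, and is not the system Jackson’s team built.

```python
import numpy as np
from scipy.signal import stft, istft

# Toy illustration of spectral-mask source separation.
# A real system predicts the mask with a neural network trained on many
# isolated stems; here a simple frequency threshold stands in for that
# "learned" mask, purely to show the mechanics.

fs = 16000                                   # sample rate in Hz
t = np.arange(0, 3.0, 1 / fs)

voice = 0.6 * np.sin(2 * np.pi * 220 * t)    # stand-in for a vocal (lower pitch)
piano = 0.4 * np.sin(2 * np.pi * 880 * t)    # stand-in for the piano (higher pitch)
mix = voice + piano                          # the "demo tape": both on one track

# 1. Move the mixture into the time-frequency domain.
freqs, frames, Z = stft(mix, fs=fs, nperseg=1024)

# 2. Build a mask over the spectrogram. A trained model would output a
#    per-bin estimate of how much energy belongs to the target source;
#    this toy mask just keeps everything below 500 Hz.
mask = (freqs < 500)[:, None].astype(float)

# 3. Apply the mask and invert back to a waveform.
_, voice_estimate = istft(Z * mask, fs=fs, nperseg=1024)

# The estimate should correlate strongly with the clean "voice" track.
n = min(len(voice), len(voice_estimate))
corr = np.corrcoef(voice[:n], voice_estimate[:n])[0, 1]
print(f"correlation with the clean 'voice' track: {corr:.3f}")
```

In a production separator, that hard-coded cutoff is replaced by a network’s per-bin judgment of which parts of the spectrogram belong to the voice, which is what allows it to pull a singer away from a piano even when the two overlap in frequency.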

In the case of Lennon’s demo, it worked like a charm, as you can hear at this timestamp in the making-of short.

“There it was — John’s voice, crystal clear,” said Paul. “Now we could mix it and make a proper record of it.”

Some may question the ethics of making that record, but everyone involved seems to think John would have been all for it, as he loved tinkering with technology and had, of course, written and performed the song originally with the intention of recording it.

But more importantly, it seems to have acted as a bit of closure for the group. The vicissitudes of stardom and creativity they endured are more than adequately documented, but to lose a friend and creative partner of decades that way, and to have this last, lingering loose end dangling just out of reach must have been torturous.

As anyone who has lost someone can attest, every vestige of them becomes precious. “To hear John’s voice… that’s a thing we should cherish,” George Harrison had said back in 1995.

And now with a quarter-century’s worth of technological improvements brought to bear, that’s exactly what they could do.

“It was the closest we’ll ever come to having him back in the room,” said Ringo.

You can listen to “Now And Then” right here.
