Can you hear me now? AI-coustics to fight noisy audio with generative AI

Noisy recordings of interviews and speeches are the bane of audio engineers' existence. But one German startup hopes to fix that with a unique technical approach that uses generative AI to enhance the clarity of voices in video.

Today, AI-coustics emerged from stealth with €1.9 million in funding. According to co-founder and CEO Fabian Seipel, AI-coustics' technology goes beyond standard noise suppression to work across -- and with -- any device and speaker.

"Our core mission is to make every digital interaction, whether on a conference call, consumer device or casual social media video, as clear as a broadcast from a professional studio," Seipel told TechCrunch in an interview.

Seipel, an audio engineer by training, co-founded AI-coustics with Corvin Jaedicke, a lecturer in machine learning at the Technical University of Berlin, in 2021. Seipel and Jaedicke met while studying audio technology at TU Berlin, where they often encountered poor audio quality in the online courses and tutorials they had to take.

"We've been driven by a personal mission to overcome the pervasive challenge of poor audio quality in digital communications," Seipel said. "While my hearing is slightly impaired from music production in my early twenties, I’ve always struggled with online content and lectures, which led us to work on the speech quality and intelligibility topic in the first place."

The market for AI-powered noise-suppressing, voice-enhancing software is very robust already. AI-coustics' rivals include Insoundz, which uses generative AI to enhance streamed and pre-recorded speech clips, and Veed.io, a video editing suite with tools to remove background noise from clips.

But Seipel says AI-coustics has a unique approach to developing the AI mechanisms that do the actual noise reduction work.

The startup uses a model trained on speech samples recorded in the startup's studio in Berlin, AI-coustics' home city. People are paid to record samples -- Seipel wouldn't say how much -- that then get added to a data set to train AI-coustics' noise-reducing model.

"We developed a unique approach to simulate audio artifacts and problems -- e.g. noise, reverberation, compression, band-limited microphones, distortion, clipping and so on -- during the training process," Seipel said.

I'd wager that some will take issue with AI-coustics' one-time compensation scheme for creators, given the model that the startup is training could turn out to be quite lucrative over the long run. (There's a healthy debate over whether creators of training data for AI models deserve residuals for their contributions.) But perhaps the bigger, more immediate concern is bias.

It's well-established that speech recognition algorithms can develop biases -- biases that end up harming users. A study published in the Proceedings of the National Academy of Sciences showed that speech recognition systems from leading companies were twice as likely to incorrectly transcribe audio from Black speakers as from white speakers.

In an effort to combat this, Seipel says AI-coustics is focusing on recruiting "diverse" speech sample contributors. He added: "Size and diversity are key to eliminating bias and making the technology work for all languages, speaker identities, ages, accents and genders."

It wasn't the most scientific test, but I uploaded three video clips -- a 1929 interview with an 87-year-old farmer, a car driving demo and an Israel-Palestine conflict protest -- to AI-coustics' platform to see how well it performed with each. AI-coustics indeed delivered on its promise of boosting clarity; to my ears, the processed clips had far less ambient background noise drowning out speakers.

Here's the farmer interview before:

https://techcrunch.com/wp-content/uploads/2024/03/Interview-With-An-87-Year-Old-Farmer-1929.mp3

And after:

https://techcrunch.com/wp-content/uploads/2024/03/Interview-With-An-87-Year-Old-Farmer-1929-full-enhanced-by-aicoustics.mp3

Seipel sees AI-coustics' technology being used for real-time as well as recorded speech enhancement, and perhaps even being embedded in devices like soundbars, smartphones and headphones to automatically boost voice clarity. Currently, AI-coustics offers a web app and API for post-processing audio and video recordings, and an SDK that brings AI-coustics' platform into existing workflows, apps and hardware.

Seipel says that AI-coustics -- which makes money through a mix of subscriptions, on-demand pricing and licensing -- has five enterprise customers and 20,000 users (albeit not all paying) at present. On the roadmap for the next few months is expanding the company's four-person team and improving the underlying speech-enhancing model.

"Prior to our initial investment, AI-coustics ran a fairly lean operation with a low burn rate in order to survive the difficulties of the VC investment market," Seipel said. "AI-coustics now has a substantial network of investors and mentors in Germany and the U.K. for advice. A strong technology base and the ability to address different markets with the same database and core technology gives the company flexibility and the ability for smaller pivots."

Asked whether audio mastering tech like AI-coustics' might eliminate jobs, as some fear, Seipel noted the technology's potential to expedite time-consuming tasks that currently fall to human audio engineers.

"A content creation studio or broadcast manager can save time and money by automating parts of the audio production process with AI-coustics while maintaining the highest speech quality," he said. "Speech quality and intelligibility still is an annoying problem in nearly every consumer or pro-device as well as in content production or consumption. Every application where speech is being recorded, processed, or transmitted can potentially benefit from our technology."

The funding took the form of an equity and debt tranche from Connect Ventures, Inovia Capital, FOV Ventures and Ableton CFO Jan Bohl.