A brain-controlled system may help listeners with hearing loss cut through the noise

Scientists say they've developed brain-decoding technology that could help people who use hearing assistance devices pick out one voice in a crowded room — a longstanding challenge for hearing aids.
Matteo Farinella / Columbia University's Zuckerman Institute

Imagine a crowded room. It's a chaos of sound, teeming with indistinct voices.

Scientists call this the cocktail party problem. To overcome it, most people are able to focus on a single speaker's voice, which cues the brain to amplify that sound and turn down the rest.

For people who use hearing aids, though, that process becomes a lot harder.

Now, in the journal Nature Neuroscience, a team describes a solution that decodes a person's brain waves to choose which voice their hearing system will amplify.

It amounts to a "brain-controlled hearing aid," says Nima Mesgarani, an author of the paper and an associate professor at Columbia University who runs the school's Neural Acoustic Processing Lab. The new approach could lead to better hearing technology, including hearing aids, assistive listening devices and cochlear implants.

But so far, the approach has been tested only on four people with typical hearing, says Josh McDermott, who runs the Laboratory for Computational Audition at MIT and was not involved in the study.

Whether the system will work as well for people with hearing loss remains an "open question," he says.

How the brain filters sound

The new research is based on a discovery made in 2012 by Mesgarani and Dr. Eddie Chang, a neurosurgeon at the University of California, San Francisco.

The finding helps explain how the brains of people with typical hearing are able to solve the cocktail party problem by selecting one voice to amplify while filtering out others.

Mesgarani and Chang showed that the key is a distinct pattern of brain waves in the auditory cortex, which processes sounds.

"When you look at the brain of a listener at the cocktail party," Mesgarani says, "what you see is that these brain waves are tracking only the sound that [the listener] is focusing on, and not the other sources."

The pattern of activity "gives us a signature," Mesgarani says. "We can look at someone's brain and decide, oh yeah, this is the source they want to listen to."

So the team set out to see whether they could use that neural signature to improve hearing systems. The effort was led by Vishal Choudhari, who was a graduate student in Mesgarani's lab at the time. He's currently a research scientist at a startup working on next-generation hearing technologies.

The team did an experiment with four people who were in the hospital for epilepsy treatment.

The participants, who had typical hearing, already had electrodes in their brains as part of their treatment. That allowed the team to monitor signals coming from their auditory cortex.

Mesgarani says the next step was to simulate a cocktail party at the bedside.

"They have two loudspeakers in front of them," he says. "Each one is playing a different conversation."

At first, the competing conversations were played at the same volume.

That left the participants struggling to comprehend either one. Then, Mesgarani says, the team switched on a system that automatically adjusted the volume based on the person's brain waves.

"If the person wants to hear 'conversation one,' we make that louder and we make everything else softer," Mesgarani says.

The system correctly detected which conversation the person wanted to hear up to 90% of the time. And when it was switched on, "their comprehension went up and their listening effort [went] down," Mesgarani says.
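The study's actual decoder runs on electrode recordings from the auditory cortex, but the behavior Mesgarani describes — check which conversation the brain waves are tracking, then boost that one and soften the rest — can be sketched as a toy in code. Everything below (the envelope correlation, the gain values, the simulated "brain" signal) is an illustrative assumption, not the team's published algorithm.

```python
import numpy as np

def decode_attended_stream(neural_envelope, stream_envelopes):
    """Guess which audio stream the listener is attending to by
    correlating a neural signal with each stream's amplitude envelope.
    (Toy stand-in for the study's neural decoder.)"""
    scores = [np.corrcoef(neural_envelope, env)[0, 1]
              for env in stream_envelopes]
    return int(np.argmax(scores))

def remix(streams, attended_idx, boost=2.0, cut=0.25):
    """Make the attended conversation louder and everything else softer."""
    return sum(s * (boost if i == attended_idx else cut)
               for i, s in enumerate(streams))

# Simulated cocktail party: two competing conversations, plus brain
# activity that tracks conversation 0 with some added noise.
rng = np.random.default_rng(0)
conversations = [rng.random(1000) for _ in range(2)]
brain_signal = conversations[0] + 0.3 * rng.standard_normal(1000)

attended = decode_attended_stream(brain_signal, conversations)
output_audio = remix(conversations, attended)
```

In this simulation the decoder picks conversation 0 because the simulated brain signal correlates far more strongly with its envelope than with the competitor's — the same "signature" logic Mesgarani describes, minus all the real-world difficulty of reading it from actual neural recordings.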

A smarter hearing device

The system might be less accurate when reading the brain waves of people with hearing loss, McDermott says, because the signal is weaker. But he says it's worth trying because even the most advanced hearing aids can't focus on a specific voice.

"They have some pretty good algorithms for reducing background noise," McDermott says. But when it comes to competing voices, he says, the devices have no way to decide which one to amplify.

A brain-controlled hearing aid may be one way to address that problem, McDermott says. Another is to allow an artificial intelligence system to study a person's behavior and then use that knowledge to predict which voice is the most likely target.

Either way, there is growing demand for hearing systems that can solve the cocktail party problem. More than half of people 75 and older are living with disabling hearing loss.

"If you live long enough, you start to go deaf," McDermott says, "so it's a really important problem to be doing basic scientific research on."

Copyright 2026 NPR

Jon Hamilton
Jon Hamilton is a correspondent for NPR's Science Desk. Currently he focuses on neuroscience and health risks.