A team of scientists at Columbia University has developed a brain-controlled hearing aid that can recognize and decode a person's voice in a crowd. It does so by combining speech-separation algorithms with feedback from the listener's own brain waves, mimicking what the brain does naturally. Using complex mathematical models, the hearing aid imitates the brain's ability to single out one voice among many.
“The brain area that processes sound is extraordinarily sensitive and powerful; it can amplify one voice over others, seemingly effortlessly, while today’s hearing aids still pale in comparison,” says Nima Mesgarani, PhD, the paper's senior author and a principal investigator at Columbia University's Mortimer B. Zuckerman Mind Brain Behavior Institute. “By creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do.”
From the Columbia University article: "The Columbia team’s brain-controlled hearing aid is different. Instead of relying solely on external sound-amplifiers, like microphones, it also monitors the listener’s own brain waves.
Using this knowledge the team combined powerful speech-separation algorithms with neural networks, complex mathematical models that imitate the brain’s natural computational abilities. They created a system that first separates out the voices of individual speakers from a group, and then compares the voices of each speaker to the brain waves of the person listening. The speaker whose voice pattern most closely matches the listener’s brain waves is then amplified over the rest.
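The matching step described above can be sketched in a few lines. This is a toy illustration only, not the team's implementation: the function names, the correlation-based scoring, and the synthetic "envelopes" standing in for separated voices and decoded brain waves are all assumptions made for the example.

```python
import math
import random

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def pick_attended_speaker(separated, decoded, boost=4.0):
    """Toy sketch of the matching step: given voice envelopes already
    separated from the mixture and an envelope decoded from the listener's
    brain waves, amplify the speaker whose envelope correlates best.
    The boost value is illustrative, not from the published work."""
    scores = [pearson(env, decoded) for env in separated]
    attended = scores.index(max(scores))
    gains = [boost if i == attended else 1.0 for i in range(len(separated))]
    return attended, gains

# Two synthetic voice envelopes; the "decoded" brain signal tracks speaker 1.
rng = random.Random(0)
t = [i / 200 for i in range(200)]
spk0 = [abs(math.sin(3 * x)) for x in t]
spk1 = [abs(math.sin(11 * x)) for x in t]
decoded = [v + 0.05 * rng.gauss(0, 1) for v in spk1]

idx, gains = pick_attended_speaker([spk0, spk1], decoded)
# idx identifies the attended speaker; gains amplify that voice over the rest
```

In a real system the "decoded" envelope would itself come from a neural-network reconstruction of the attended speech from brain recordings, and the comparison would run continuously so the amplified voice can follow the listener's shifting attention.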
“Our end result was a speech-separation algorithm that performed similarly to previous versions but with an important improvement,” said Dr. Mesgarani. “It could recognize and decode a voice — any voice — right off the bat.”
Read more about the hearing aid that can pick out voices in a crowd at Columbia University.