
auditory cortex

Can a Mind-Reading Computer Speak for Those Who Cannot?


Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from defeating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak because of conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, or other similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, it made the reconstructed sounds easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
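
For readers who want to see the general idea in code, here is a minimal, purely illustrative sketch: a small neural network is trained to map features of recorded brain activity onto the parameters a vocoder would need to synthesize sound. The array shapes, the synthetic data, and the scikit-learn model below are assumptions made for illustration only; they are not the researchers’ actual recordings, network architecture, or vocoder.

```python
# Illustrative sketch only: decode synthetic "brain activity" into vocoder parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in brain activity: 1,000 time windows x 128 electrode features (made-up numbers)
neural_features = rng.normal(size=(1000, 128))

# Stand-in targets: 32 vocoder parameters (e.g., spectral envelope values) per time window
vocoder_params = rng.normal(size=(1000, 32))

# A small feed-forward network learns the mapping from brain activity to sound parameters
decoder = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500, random_state=0)
decoder.fit(neural_features, vocoder_params)

# New brain activity is decoded into vocoder parameters, which a speech
# synthesizer would then turn into audible (if somewhat robotic) speech
new_activity = rng.normal(size=(10, 128))
predicted_params = decoder.predict(new_activity)
print(predicted_params.shape)  # (10, 32)
```

In the actual study, of course, the decoder is trained on real auditory-cortex recordings and its output drives a true speech vocoder, but the basic recipe is the same: neural features in, synthesis parameters out.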

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


LabTV: Curious About a Mother’s Bond


The bond between a mother and her child is obviously very special. That’s true not only in humans, but in mice and other animals that feed and care for their young. But what exactly goes on in the brain of a mother when she hears her baby crying? That’s one of the fascinating questions being explored by Bianca Jones Marlin, the young neuroscience researcher featured in this LabTV video.

Currently a postdoctoral fellow at New York University School of Medicine, Marlin is particularly interested in the influence of a hormone called oxytocin, popularly referred to as the “love hormone,” on maternal behaviors. While working on her Ph.D. in the lab of Robert Froemke, Marlin tested the behavior and underlying brain responses of female mice—both mothers and non-mothers—upon hearing the distress cries of young mice, which are called pups. She also examined how those interactions changed with the addition of oxytocin.

I’m pleased to report that the results of the NIH-funded work Marlin describes in her video appeared recently in the highly competitive journal Nature [1]. And what she found might strike a chord with all the mothers out there. Her studies show that oxytocin makes key portions of the mouse brain more sensitive to the cries of the pups, almost as if someone turned up the volume.

In fact, when Marlin and her colleagues delivered oxytocin to the brains (specifically, the left auditory cortexes) of mice with no pups of their own, they responded like mothers themselves! Those childless mice quickly learned to perk up and fetch pups in distress, returning them to the safety of their nests.

Marlin says her interest in neuroscience arose from her experiences growing up in a foster family. She witnessed some of her foster brothers and sisters struggling with school and learning. As an undergraduate at Saint John’s University in Queens, NY, she earned a dual bachelor’s degree in Biology and Adolescent Education before getting her license to teach 6th through 12th grade Biology. But Marlin soon decided she could have a greater impact by studying how the brain works and gaining a better understanding of the biological mechanisms involved in learning, whether in the classroom or through life experiences, such as motherhood.

Marlin welcomes the opportunity that the lab gives her to “be an explorer”—to ask deep, even ethereal, questions and devise experiments aimed at answering them. “That’s the beauty of science and research,” she says. “To be able to do that the rest of my life? I’d be very happy.”

References:

[1] Oxytocin enables maternal behaviour by balancing cortical inhibition. Marlin BJ, Mitre M, D’amour JA, Chao MV, Froemke RC. Nature. 2015 Apr 23;520(7548):499-504.

Links:

LabTV

Froemke Lab (NYU Langone)

Science Careers (National Institute of General Medical Sciences/NIH)

Careers Blog (Office of Intramural Training/NIH)

Scientific Careers at NIH



Vision Loss Boosts Auditory Perception



Caption: A neuron (red) in the auditory cortex of a mouse brain receives input from axons projecting from the thalamus (green). Also shown are the nuclei (blue) of other cells.
Credit: Emily Petrus, Johns Hopkins University, Baltimore

Many people with vision loss—including such gifted musicians as the late Doc Watson (my favorite guitar picker), Stevie Wonder, Andrea Bocelli, and the Blind Boys of Alabama—are thought to have supersensitive hearing. They are often much better at discriminating pitch, locating the origin of sounds, and hearing softer tones than people who can see. Now, a new animal study suggests that even a relatively brief period of simulated blindness may have the power to enhance hearing among those with normal vision.

In the study, NIH-funded researchers at the University of Maryland in College Park and Johns Hopkins University in Baltimore found that when they kept adult mice in complete darkness for one week, the animals’ ability to hear significantly improved [1]. What’s more, when they examined the animals’ brains, the researchers detected changes in the connections among neurons in the part of the brain where sound is processed, the auditory cortex.