
auditory cortex

Singing for the Fences


Credit: NIH

I’ve sung thousands of songs in my life, mostly in the forgiving company of family and friends. But, until a few years ago, I’d never dreamed that I would have the opportunity to do a solo performance of the Star-Spangled Banner in a major league ballpark.

When I first learned that the Washington Nationals had selected me to sing the national anthem before a home game with the New York Mets on May 24, 2016, I was thrilled. But then another response emerged: yes, that would be called fear. Not only would I be singing before my biggest audience ever, I would be taking on a song that’s extremely challenging for even the most accomplished performer.

The musician in me was particularly concerned about landing the anthem’s tricky high F note on “land of the free” without screeching or going flat. So, I tracked down a voice teacher who gave me a crash course about how to breathe properly, how to project, how to stay on pitch on a high note, and how to hit the national anthem out of the park. She suggested that a good way to train is to sing the entire song with each syllable replaced by “meow.” It sounds ridiculous, but it helped—try it sometime. And then I practiced, practiced, practiced. I think the preparation paid off, but watch the video to decide for yourself!

Three years later, the scientist in me remains fascinated by what goes on in the human brain when we listen to or perform music. The NIH has even partnered with the John F. Kennedy Center for the Performing Arts to launch the Sound Health initiative to explore the role of music in health. A great many questions remain to be answered. For example, what is it that makes us enjoy singers who stay on pitch and cringe when we hear someone go sharp or flat? Why do some intervals sound pleasant and others grating? And, to push that line of inquiry even further, why do we tune in to the pitch of people’s voices as they speak to help figure out whether they are happy, sad, angry, and so on?

To understand more about the neuroscience of pitch, a research team led by Bevil Conway of NIH’s National Eye Institute used functional MRI (fMRI) to study activity in the region of the brain involved in processing sound (the auditory cortex), both in humans and in our evolutionary relative, the macaque monkey [1]. For purposes of the study, published recently in Nature Neuroscience, pitch was defined as the harmonic sounds that we hear when listening to music.

In both humans and macaques, the auditory cortex lit up comparably in response to low- and high-frequency sounds. But only the human auditory cortex responded selectively to harmonic tones; the macaques responded much the same to those tones as to toneless white noise spanning the same frequency range. Based on what they found in both species, the researchers suspect that macaques experience music and other sounds differently than humans do. They go on to suggest that the perception of pitch must have provided some kind of evolutionary advantage for our ancestors, and it has apparently shaped the basic organization of the human brain.
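
For readers who want to see the logic behind that comparison, here is a toy sketch in Python. It is not the authors’ analysis pipeline; the numbers and the tone_selectivity function are invented purely to illustrate the kind of contrast the study relies on: how strongly a patch of auditory cortex prefers harmonic tones over frequency-matched noise.

```python
import numpy as np

def tone_selectivity(tone_responses, noise_responses):
    """Toy contrast: (tone - noise) / (tone + noise), averaged across stimuli.

    Values near 0 mean the region responds about equally to harmonic tones
    and to frequency-matched noise; values well above 0 mean it prefers tones.
    """
    tone = np.mean(tone_responses)
    noise = np.mean(noise_responses)
    return (tone - noise) / (tone + noise)

# Hypothetical average fMRI response amplitudes (arbitrary units).
human_index = tone_selectivity(tone_responses=[1.8, 2.1, 1.9],
                               noise_responses=[0.9, 1.0, 0.8])
macaque_index = tone_selectivity(tone_responses=[1.1, 1.0, 1.2],
                                 noise_responses=[1.0, 1.1, 1.0])

print(f"Human tone-selectivity index:   {human_index:.2f}")   # well above 0
print(f"Macaque tone-selectivity index: {macaque_index:.2f}")  # close to 0
```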

But enough about science and back to the ballpark! In front of 33,009 pitch-sensitive Homo sapiens, I managed to sing our national anthem without audible groaning from the crowd. What an honor it was! I pass along this memory to encourage each of you to test your own pitch this Independence Day. Let’s all celebrate the birth of our great nation. Have a happy Fourth!

Reference:

[1] Divergence in the functional organization of human and macaque auditory cortex revealed by fMRI responses to harmonic tones. Norman-Haignere SV, Kanwisher N, McDermott JH, Conway BR. Nat Neurosci. 2019 Jun 10. [Epub ahead of print]

Links:

Our brains appear uniquely tuned for musical pitch (National Institute of Neurological Disorders and Stroke news release)

Sound Health: An NIH-Kennedy Center Partnership (NIH)

Bevil Conway (National Eye Institute/NIH)

NIH Support: National Institute of Neurological Disorders and Stroke; National Eye Institute; National Institute of Mental Health


Can a Mind-Reading Computer Speak for Those Who Cannot?


Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, and similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When those networks processed and “cleaned up” the vocoder-produced sounds, the reconstructed words became easier for a listener to understand, though this first attempt still sounds pretty robotic.
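
For those curious about the decoding step itself, here is a deliberately simplified, hypothetical sketch in Python (NumPy only). It is not the architecture the Columbia team used; it just illustrates the general recipe described above: train a small network to map recorded neural activity onto the parameters a vocoder needs, strengthening or weakening connections until the reconstruction error shrinks. All of the data and dimensions here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 128 neural features per time step, decoded into
# 32 vocoder parameters (for example, energies in different spectral bands).
N_NEURAL, N_VOCODER, N_STEPS = 128, 32, 500

# Simulated training data: neural activity alongside the vocoder parameters
# of the speech the listener was hearing at the same moments.
neural = rng.normal(size=(N_STEPS, N_NEURAL))
true_mapping = rng.normal(size=(N_NEURAL, N_VOCODER)) / np.sqrt(N_NEURAL)
vocoder_target = np.tanh(neural @ true_mapping)

# A tiny "deep" network: one hidden layer, trained by gradient descent
# to predict vocoder parameters from neural activity.
H, lr = 64, 0.01
W1 = rng.normal(size=(N_NEURAL, H)) * 0.05
W2 = rng.normal(size=(H, N_VOCODER)) * 0.05

for step in range(2000):
    hidden = np.tanh(neural @ W1)               # forward pass
    pred = hidden @ W2
    err = pred - vocoder_target                 # prediction error
    grad_W2 = hidden.T @ err / N_STEPS          # backpropagate the error...
    grad_hidden = err @ W2.T * (1 - hidden**2)
    grad_W1 = neural.T @ grad_hidden / N_STEPS
    W1 -= lr * grad_W1                          # ...and adjust the connections
    W2 -= lr * grad_W2

mse = np.mean((np.tanh(neural @ W1) @ W2 - vocoder_target) ** 2)
print(f"Reconstruction error after training: {mse:.4f}")
# In the real system, the predicted vocoder parameters would then drive a
# speech synthesizer to produce an audible, if still robotic, voice.
```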

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


LabTV: Curious About a Mother’s Bond


The bond between a mother and her child is obviously very special. That’s true not only in humans, but in mice and other animals that feed and care for their young. But what exactly goes on in the brain of a mother when she hears her baby crying? That’s one of the fascinating questions being explored by Bianca Jones Marlin, the young neuroscience researcher featured in this LabTV video.

Currently a postdoctoral fellow at New York University School of Medicine, Marlin is particularly interested in the influence of a hormone called oxytocin, popularly referred to as the “love hormone,” on maternal behaviors. While working on her Ph.D. in the lab of Robert Froemke, Marlin tested the behavior and underlying brain responses of female mice—both mothers and non-mothers—upon hearing distress cries of young mice, which are called pups. She also examined how those interactions changed with the addition of oxytocin.

I’m pleased to report that the results of the NIH-funded work Marlin describes in her video appeared recently in the highly competitive journal Nature [1]. And what she found might strike a chord with all the mothers out there. Her studies show that oxytocin makes key portions of the mouse brain more sensitive to the cries of the pups, almost as if someone turned up the volume.

In fact, when Marlin and her colleagues delivered oxytocin to the brains (specifically, the left auditory cortexes) of mice with no pups of their own, they responded like mothers themselves! Those childless mice quickly learned to perk up and fetch pups in distress, returning them to the safety of their nests.

Marlin says her interest in neuroscience arose from her experiences growing up in a foster family. She witnessed some of her foster brothers and sisters struggling with school and learning. As an undergraduate at Saint John’s University in Queens, NY, she earned a dual bachelor’s degree in Biology and Adolescent Education before getting her license to teach 6th through 12th grade Biology. But Marlin soon decided she could have a greater impact by studying how the brain works and gaining a better understanding of the biological mechanisms involved in learning, whether in the classroom or through life experiences, such as motherhood.

Marlin welcomes the opportunity that the lab gives her to “be an explorer”—to ask deep, even ethereal, questions and devise experiments aimed at answering them. “That’s the beauty of science and research,” she says. “To be able to do that the rest of my life? I’d be very happy.”

Reference:

[1] Oxytocin enables maternal behaviour by balancing cortical inhibition. Marlin BJ, Mitre M, D’amour JA, Chao MV, Froemke RC. Nature. 2015 Apr 23;520(7548):499-504.

Links:

LabTV

Froemke Lab (NYU Langone)

Science Careers (National Institute of General Medical Sciences/NIH)

Careers Blog (Office of Intramural Training/NIH)

Scientific Careers at NIH

 


Vision Loss Boosts Auditory Perception



Caption: A neuron (red) in the auditory cortex of a mouse brain receives input from axons projecting from the thalamus (green). Also shown are the nuclei (blue) of other cells.
Credit: Emily Petrus, Johns Hopkins University, Baltimore

Many people with vision loss—including such gifted musicians as the late Doc Watson (my favorite guitar picker), Stevie Wonder, Andrea Bocelli, and the Blind Boys of Alabama—are thought to have supersensitive hearing. They are often much better at discriminating pitch, locating the origin of sounds, and hearing softer tones than people who can see. Now, a new animal study suggests that even a relatively brief period of simulated blindness may have the power to enhance hearing among those with normal vision.

In the study, NIH-funded researchers at the University of Maryland in College Park and Johns Hopkins University in Baltimore found that when they kept adult mice in complete darkness for one week, the animals’ ability to hear significantly improved [1]. What’s more, when they examined the animals’ brains, the researchers detected changes in the connections among neurons in the part of the brain where sound is processed, the auditory cortex.