auditory cortex

How the Brain Differentiates the ‘Click,’ ‘Crack,’ or ‘Thud’ of Everyday Tasks

Posted on by Lawrence Tabak, D.D.S., Ph.D.

A baseball player hits a ball. The word "crack" is highlighted. The word "thud" has a circle around and a diagonal line through it.
Credit: Donny Bliss, NIH; Shutterstock/Vasyl Shulga

If you’ve been staying up late to watch the World Series, you probably spent those nine innings hoping for superstars Bryce Harper or José Altuve to square up a fastball and send it sailing out of the yard. Long-time baseball fans like me can immediately distinguish the loud crack of a home-run swing from the dull thud of a weak grounder.

Our brains have such a fascinating ability to discern “right” sounds from “wrong” ones in just an instant. This applies not only in baseball, but in the things that we do throughout the day, whether it’s hitting the right note on a musical instrument or pushing the car door just enough to click it shut without slamming.

Now, an NIH-funded team of neuroscientists has discovered what happens in the brain when one hears an expected or “right” sound versus a “wrong” one after completing a task. It turns out that the mammalian brain is remarkably good at predicting both when a sound should happen and what it ideally ought to sound like. Any notable mismatch between that expectation and the feedback, and the hearing center of the brain reacts.

It may seem intuitive that humans and other animals have this auditory ability, but researchers didn’t know how neurons in the brain’s auditory cortex, where sound is processed, make these snap judgments to learn complex tasks. In the study, published in the journal Current Biology, David Schneider, New York University, New York, set out to understand how this familiar experience really works.

To do it, Schneider and colleagues, including postdoctoral fellow Nicholas Audette, looked to mice. They are a lot easier to study in the lab than humans and, while their brains aren’t miniature versions of our own, our sensory systems share many fundamental similarities because we are both mammals.

Of course, mice don’t go around hitting home runs or opening and closing doors. So, the researchers’ first step was to train the animals on a task akin to closing the car door: pushing a lever with their paws in just the right way to receive a reward. The researchers also played a distinctive tone each time the lever reached that perfect position.

After making thousands of attempts and hearing the associated sound, the mice knew just what to do—and what it should sound like when they did it right. Their studies showed that, when the researchers removed the sound, played the wrong sound, or played the correct sound at the wrong time, the mice took notice and adjusted their actions, just as you might do if you pushed a car door shut and the resulting click wasn’t right.

To find out how neurons in the auditory cortex responded to produce the observed behaviors, Schneider’s team also recorded brain activity. Intriguingly, they found that auditory neurons hardly responded when a mouse pushed the lever and heard the sound they’d learned to expect. It was only when something about the sound was “off” that their auditory neurons suddenly crackled with activity.

As the researchers explained, it seems from these studies that the mammalian auditory cortex responds not to the sounds themselves but to how those sounds match up to, or violate, expectations. When the researchers canceled the sound altogether, as might happen if you didn’t push a car door hard enough for it to click shut, activity within a select group of auditory neurons spiked right when the mice should have heard the sound.
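To make that “prediction error” logic concrete, here is a toy Python sketch. It is purely an illustration of the idea described above, not the authors’ model or analysis; the response values, timing tolerance, and function names are invented for the example.

```python
# A toy sketch (not the authors' model) of the prediction-error idea described
# above: auditory-cortex neurons stay nearly silent when the heard sound matches
# the learned expectation, and respond strongly when the sound is missing,
# wrong, or mistimed. All numbers are invented for illustration.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Expectation:
    sound: str       # the learned lever tone, e.g. "click"
    time_ms: float   # when the tone should arrive after the movement


def mismatch_response(expected: Expectation,
                      heard_sound: Optional[str],
                      heard_time_ms: Optional[float],
                      timing_tolerance_ms: float = 50.0) -> float:
    """Return a made-up response strength from 0 (silent) to 1 (strong)."""
    if heard_sound is None:
        return 1.0   # omitted sound: strong response when the tone was due
    if heard_sound != expected.sound:
        return 1.0   # wrong sound: strong response
    if abs(heard_time_ms - expected.time_ms) > timing_tolerance_ms:
        return 0.8   # right sound at the wrong time: still a mismatch
    return 0.05      # expected sound at the expected time: near silence


learned = Expectation(sound="click", time_ms=120.0)
print(mismatch_response(learned, "click", 125.0))  # ~0.05: matches expectation
print(mismatch_response(learned, "thud", 125.0))   # 1.0: unexpected sound
print(mismatch_response(learned, None, None))      # 1.0: expected sound omitted
```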

Schneider’s team notes that the same brain areas and circuitry that predict and process self-generated sounds in everyday tasks also play a role in conditions such as schizophrenia, in which people may hear voices or other sounds that aren’t there. The team hopes their studies will help to explain what goes wrong—and perhaps how to help—in schizophrenia and other neural disorders. Perhaps they’ll also learn more about what goes through the healthy brain when anticipating the satisfying click of a closed door or the loud crack of a World Series home run.

Reference:

[1] Precise movement-based predictions in the mouse auditory cortex. Audette NJ, Zhou WX, Chioma A, Schneider DM. Curr Biol. 2022 Oct 24.

Links:

How Do We Hear? (National Institute on Deafness and Other Communication Disorders/NIH)

Schizophrenia (National Institute of Mental Health/NIH)

David Schneider (New York University, New York)

NIH Support: National Institute of Mental Health; National Institute on Deafness and Other Communication Disorders


A Neuronal Light Show

Posted on by Dr. Francis Collins

Credit: Chen X, Cell, 2019

These colorful lights might look like a video vignette from one of the spectacular evening light shows taking place this holiday season. But they actually aren’t. These lights are illuminating the way to a much fuller understanding of the mammalian brain.

The video features a new research method called BARseq (Barcoded Anatomy Resolved by Sequencing). Created by a team of NIH-funded researchers led by Anthony Zador, Cold Spring Harbor Laboratory, NY, BARseq enables scientists to map in a matter of weeks the location of thousands of neurons in the mouse brain with greater precision than has ever been possible before.

How does it work? With BARseq, researchers generate uniquely identifying RNA barcodes and tag each individual neuron within brain tissue with one. As reported recently in the journal Cell, those barcodes allow them to keep track of the location of an individual cell amid millions of neurons [1]. This also enables researchers to map the tangled paths of individual neurons from one region of the mouse brain to the next.

The video shows how the researchers read the barcodes. Each twinkling light is a barcoded neuron within a thin slice of mouse brain tissue. The changing colors from frame to frame correspond to one of the four letters, or chemical bases, in RNA (A=purple, G=blue, U=yellow, and C=white). A neuron that flashes blue, purple, yellow, white is tagged with a barcode that reads GAUC, while yellow, white, white, white is UCCC.
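For readers who like to see the readout logic spelled out, here is a tiny Python sketch that applies the color-to-base mapping above to the two example neurons. It is only an illustration of the decoding step, not the lab’s actual sequencing pipeline.

```python
# Translate the colors observed for one neuron, cycle by cycle, into its RNA
# barcode using the mapping given above (A=purple, G=blue, U=yellow, C=white).

COLOR_TO_BASE = {"purple": "A", "blue": "G", "yellow": "U", "white": "C"}


def decode_barcode(colors):
    """Convert a per-cycle color sequence into the corresponding barcode string."""
    return "".join(COLOR_TO_BASE[color] for color in colors)


print(decode_barcode(["blue", "purple", "yellow", "white"]))  # GAUC
print(decode_barcode(["yellow", "white", "white", "white"]))  # UCCC
```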

By sequencing and reading the barcodes to distinguish among seemingly identical cells, the researchers mapped the connections of more than 3,500 neurons in a mouse’s auditory cortex, a part of the brain involved in hearing. In fact, they report they’re now able to map tens of thousands of individual neurons in a mouse in a matter of weeks.

What makes BARseq even better than the team’s previous mapping approach, called MAPseq, is its ability to read the barcodes at their original location in the brain tissue [2]. As a result, they can produce maps with much finer resolution. It’s also possible to maintain other important information about each mapped neuron’s identity and function, including the expression of its genes.

Zador reports that they’re continuing to use BARseq to produce maps of other essential areas of the mouse brain with more detail than had previously been possible. Ultimately, these maps will provide a firm foundation for a better understanding of human thought, consciousness, and decision-making, along with how such mental processes get altered in conditions such as autism spectrum disorder, schizophrenia, and depression.

Here’s wishing everyone a safe and happy holiday season. It’s been a fantastic year in science, and I look forward to bringing you more cool NIH-supported research in 2020!

References:

[1] High-Throughput Mapping of Long-Range Neuronal Projection Using In Situ Sequencing. Chen X, Sun YC, Zhan H, Kebschull JM, Fischer S, Matho K, Huang ZJ, Gillis J, Zador AM. Cell. 2019 Oct 17;179(3):772-786.e19.

[2] High-Throughput Mapping of Single-Neuron Projections by Sequencing of Barcoded RNA. Kebschull JM, Garcia da Silva P, Reid AP, Peikon ID, Albeanu DF, Zador AM. Neuron. 2016 Sep 7;91(5):975-987.

Links:

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Zador Lab (Cold Spring Harbor Laboratory, Cold Spring Harbor, NY)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute on Drug Abuse; National Cancer Institute


Singing for the Fences

Posted on by Dr. Francis Collins

Credit: NIH

I’ve sung thousands of songs in my life, mostly in the forgiving company of family and friends. But, until a few years ago, I’d never dreamed that I would have the opportunity to do a solo performance of the Star-Spangled Banner in a major league ballpark.

When I first learned that the Washington Nationals had selected me to sing the national anthem before a home game with the New York Mets on May 24, 2016, I was thrilled. But then another response emerged: yes, that would be called fear. Not only would I be singing before my biggest audience ever, I would be taking on a song that’s extremely challenging for even the most accomplished performer.

The musician in me was particularly concerned about landing the anthem’s tricky high F note on “land of the free” without screeching or going flat. So, I tracked down a voice teacher who gave me a crash course about how to breathe properly, how to project, how to stay on pitch on a high note, and how to hit the national anthem out of the park. She suggested that a good way to train is to sing the entire song with each syllable replaced by “meow.” It sounds ridiculous, but it helped—try it sometime. And then I practiced, practiced, practiced. I think the preparation paid off, but watch the video to decide for yourself!

Three years later, the scientist in me remains fascinated by what goes on in the human brain when we listen to or perform music. The NIH has even partnered with the John F. Kennedy Center for the Performing Arts to launch the Sound Health initiative to explore the role of music in health. A great many questions remain to be answered. For example, what is it that makes us enjoy singers who stay on pitch and cringe when we hear someone go sharp or flat? Why do some intervals sound pleasant and others sound grating? And, to push that line of inquiry even further, why do we tune into the pitch of people’s voices when they are speaking to help figure out if they are happy, sad, angry, and so on?

To understand more about the neuroscience of pitch, a research team, led by Bevil Conway of NIH’s National Eye Institute, used functional MRI (fMRI) to study activity in the region of the brain involved in processing sound (the auditory cortex), both in humans and in our evolutionary relative, the macaque monkey [1]. For purposes of the study, published recently in Nature Neuroscience, pitch was defined as the harmonic sounds that we hear when listening to music.

In both humans and macaques, the auditory cortex lit up comparably in response to low- and high-frequency sound. But only the human auditory cortex responded selectively to harmonic tones; in macaques, the same regions responded just as strongly to toneless noise spanning the same frequency range. Based on these findings, the researchers suspect that macaques experience music and other sounds differently than humans do. They also suggest that the perception of pitch must have provided some kind of evolutionary advantage for our ancestors and has therefore apparently shaped the basic organization of the human brain.
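To make the stimulus distinction concrete, here is a short Python sketch that builds a harmonic complex tone and a noise signal covering roughly the same frequency range. The 200 Hz fundamental, number of harmonics, and other parameters are assumptions chosen for the example, not the exact stimuli used in the study.

```python
# Illustrative sketch of the two stimulus classes contrasted in the study: a
# harmonic complex tone (energy only at integer multiples of a fundamental
# frequency) versus noise spanning roughly the same frequency range but with
# no harmonic structure.

import numpy as np

sample_rate = 44100          # samples per second
duration_s = 1.0
t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)

fundamental_hz = 200.0
n_harmonics = 8

# Harmonic tone: a sum of sinusoids at f0, 2*f0, ..., 8*f0.
harmonic_tone = sum(np.sin(2 * np.pi * fundamental_hz * k * t)
                    for k in range(1, n_harmonics + 1))
harmonic_tone /= np.max(np.abs(harmonic_tone))

# "Toneless" comparison: band-limited noise covering the same frequency span.
rng = np.random.default_rng(seed=0)
noise = rng.standard_normal(t.size)
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(t.size, d=1.0 / sample_rate)
spectrum[(freqs < fundamental_hz) | (freqs > fundamental_hz * n_harmonics)] = 0.0
band_noise = np.fft.irfft(spectrum, n=t.size)
band_noise /= np.max(np.abs(band_noise))
```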

But enough about science and back to the ballpark! In front of 33,009 pitch-sensitive Homo sapiens, I managed to sing our national anthem without audible groaning from the crowd. What an honor it was! I pass along this memory to encourage each of you to test your own pitch this Independence Day. Let’s all celebrate the birth of our great nation. Have a happy Fourth!

Reference:

[1] Divergence in the functional organization of human and macaque auditory cortex revealed by fMRI responses to harmonic tones. Norman-Haignere SV, Kanwisher N, McDermott JH, Conway BR. Nat Neurosci. 2019 Jun 10. [Epub ahead of print]

Links:

Our brains appear uniquely tuned for musical pitch (National Institute of Neurological Disorders and Stroke news release)

Sound Health: An NIH-Kennedy Center Partnership (NIH)

Bevil Conway (National Eye Institute/NIH)

NIH Support: National Institute of Neurological Disorders and Stroke; National Eye Institute; National Institute of Mental Health


Can a Mind-Reading Computer Speak for Those Who Cannot?

Posted on by Dr. Francis Collins

Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the same technology that Amazon’s Alexa, Apple’s Siri, and similar devices use to listen and respond to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of their seizures before surgery to remove the affected tissue.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard using their brain activity alone. They then tried various methods to reproduce intelligible speech from the recorded activity and found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When those networks processed and “cleaned up” the vocoder-produced sounds, the reconstructed audio became easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
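For the technically curious, here is a deliberately simplified Python sketch of the general idea: train a deep network to map features of recorded brain activity onto the parameters a vocoder needs to synthesize speech. The feature counts, layer sizes, and training loop below are assumptions invented for this illustration, not the study’s actual architecture.

```python
# Simplified sketch: a small feed-forward network learns a mapping from
# brain-activity features to vocoder parameters. Dimensions and training
# details are assumptions for illustration only.

import torch
import torch.nn as nn

n_neural_features = 128   # e.g., per-electrode activity features (assumed)
n_vocoder_params = 32     # parameters the vocoder needs per time step (assumed)

decoder = nn.Sequential(
    nn.Linear(n_neural_features, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, n_vocoder_params),
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for time-aligned (brain activity, vocoder parameter) pairs.
brain_activity = torch.randn(1024, n_neural_features)
vocoder_targets = torch.randn(1024, n_vocoder_params)

for epoch in range(10):
    predictions = decoder(brain_activity)
    loss = loss_fn(predictions, vocoder_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, decoder(new_brain_activity) would yield vocoder parameters
# that a speech synthesizer could render as audible, if robotic-sounding, words.
```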

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


LabTV: Curious About a Mother’s Bond

Posted on by Dr. Francis Collins

The bond between a mother and her child is obviously very special. That’s true not only in humans, but in mice and other animals that feed and care for their young. But what exactly goes on in the brain of a mother when she hears her baby crying? That’s one of the fascinating questions being explored by Bianca Jones Marlin, the young neuroscience researcher featured in this LabTV video.

Currently a postdoctoral fellow at New York University School of Medicine, Marlin is particularly interested in the influence of a hormone called oxytocin, popularly referred to as the “love hormone,” on maternal behaviors. While working on her Ph.D. in the lab of Robert Froemke, Marlin tested the behavior and underlying brain responses of female mice—both mothers and non-mothers—upon hearing distress cries of young mice, which are called pups. She also examined how those interactions changed with the addition of oxytocin.

I’m pleased to report that the results of the NIH-funded work Marlin describes in her video appeared recently in the highly competitive journal Nature [1]. And what she found might strike a chord with all the mothers out there. Her studies show that oxytocin makes key portions of the mouse brain more sensitive to the cries of the pups, almost as if someone turned up the volume.

In fact, when Marlin and her colleagues delivered oxytocin to the brains (specifically, the left auditory cortices) of female mice with no pups of their own, those animals responded like mothers themselves! The childless mice quickly learned to perk up and fetch pups in distress, returning them to the safety of their nests.

Marlin says her interest in neuroscience arose from her experiences growing up in a foster family. She witnessed some of her foster brothers and sisters struggling with school and learning. As an undergraduate at Saint John’s University in Queens, NY, she earned a dual bachelor’s degree in Biology and Adolescent Education before getting her license to teach 6th through 12th grade Biology. But Marlin soon decided she could have a greater impact by studying how the brain works and gaining a better understanding of the biological mechanisms involved in learning, whether in the classroom or through life experiences, such as motherhood.

Marlin welcomes the opportunity that the lab gives her to “be an explorer”—to ask deep, even ethereal, questions and devise experiments aimed at answering them. “That’s the beauty of science and research,” she says. “To be able to do that the rest of my life? I’d be very happy.”

References:

[1] Oxytocin enables maternal behaviour by balancing cortical inhibition. Marlin BJ, Mitre M, D’amour JA, Chao MV, Froemke RC. Nature. 2015 Apr 23;520(7548):499-504.

Links:

LabTV

Froemke Lab (NYU Langone)

Science Careers (National Institute of General Medical Sciences/NIH)

Careers Blog (Office of Intramural Training/NIH)

Scientific Careers at NIH

 

