How the Brain Differentiates the ‘Click,’ ‘Crack,’ or ‘Thud’ of Everyday Tasks
Posted on by Lawrence Tabak, D.D.S., Ph.D.
If you’ve been staying up late to watch the World Series, you probably spent those nine innings hoping for superstars Bryce Harper or José Altuve to square up a fastball and send it sailing out of the yard. Long-time baseball fans like me can distinguish immediately the loud crack of a home-run swing from the dull thud of a weak grounder.
Our brains have such a fascinating ability to discern “right” sounds from “wrong” ones in just an instant. This applies not only in baseball, but in the things that we do throughout the day, whether it’s hitting the right note on a musical instrument or pushing the car door just enough to click it shut without slamming.
Now, an NIH-funded team of neuroscientists has discovered what happens in the brain when one hears an expected or “right” sound versus a “wrong” one after completing a task. It turns out that the mammalian brain is remarkably good at predicting both when a sound should happen and what it ideally ought to sound like. Any notable mismatch between that expectation and the feedback, and the hearing center of the brain reacts.
It may seem intuitive that humans and other animals have this auditory ability, but researchers didn’t know how neurons in the brain’s auditory cortex, where sound is processed, make these snap judgments to learn complex tasks. In a study published in the journal Current Biology, David Schneider, New York University, New York, set out to understand how this familiar experience really works.
To do it, Schneider and colleagues, including postdoctoral fellow Nicholas Audette, looked to mice. They are a lot easier to study in the lab than humans and, while their brains aren’t miniature versions of our own, our sensory systems share many fundamental similarities because we are both mammals.
Of course, mice don’t go around hitting home runs or opening and closing doors. So, the researchers’ first step was training the animals to complete a task akin to closing the car door. To do it, they trained the animals to push a lever with their paws in just the right way to receive a reward. They also played a distinctive tone each time the lever reached that perfect position.
After making thousands of attempts and hearing the associated sound, the mice knew just what to do—and what it should sound like when they did it right. Their studies showed that, when the researchers removed the sound, played the wrong sound, or played the correct sound at the wrong time, the mice took notice and adjusted their actions, just as you might do if you pushed a car door shut and the resulting click wasn’t right.
To find out how neurons in the auditory cortex responded to produce the observed behaviors, Schneider’s team also recorded brain activity. Intriguingly, they found that auditory neurons hardly responded when a mouse pushed the lever and heard the sound they’d learned to expect. It was only when something about the sound was “off” that their auditory neurons suddenly crackled with activity.
As the researchers explained, these studies suggest that the mammalian auditory cortex responds not to the sounds themselves but to how those sounds match up to, or violate, expectations. When the researchers canceled the sound altogether, as might happen if you didn’t push a car door hard enough to produce the familiar click, activity within a select group of auditory neurons spiked right when the mice should have heard the sound.
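The prediction logic described above, in which a response occurs only when a sound is missing, wrong, or mistimed, can be sketched as a toy function. The function name, sound labels, and time tolerance below are illustrative assumptions, not the study's actual analysis:

```python
def mismatch_response(expected_sound, actual_sound, expected_time, actual_time,
                      time_tolerance=0.05):
    """Toy model of a prediction-error signal: the 'neuron' responds only
    when the heard sound violates what the brain predicted.

    Sounds are simple labels (e.g. 'click'); times are in seconds.
    Returns 0.0 when the expectation is met, 1.0 on a mismatch.
    """
    if actual_sound is None:
        return 1.0  # expected sound was omitted entirely
    if actual_sound != expected_sound:
        return 1.0  # wrong sound
    if abs(actual_time - expected_time) > time_tolerance:
        return 1.0  # right sound, wrong time
    return 0.0      # expected sound at the expected time: little response

# The manipulations from the study, in toy form:
print(mismatch_response("click", "click", 0.20, 0.21))  # matched -> 0.0
print(mismatch_response("click", None, 0.20, 0.20))     # omitted -> 1.0
print(mismatch_response("click", "thud", 0.20, 0.20))   # wrong sound -> 1.0
print(mismatch_response("click", "click", 0.20, 0.60))  # wrong time -> 1.0
```

The key design point mirrors the finding: the matched case produces no output, so only violations of the prediction drive a response.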
Schneider’s team notes that the same brain areas and circuitry that predict and process self-generated sounds in everyday tasks also play a role in conditions such as schizophrenia, in which people may hear voices or other sounds that aren’t there. The team hopes their studies will help to explain what goes wrong—and perhaps how to help—in schizophrenia and other neural disorders. Perhaps they’ll also learn more about what goes through the healthy brain when anticipating the satisfying click of a closed door or the loud crack of a World Series home run.
Precise movement-based predictions in the mouse auditory cortex. Audette NJ, Zhou WX, Chioma A, Schneider DM. Curr Biol. 2022 Oct 24.
How Do We Hear? (National Institute on Deafness and Other Communication Disorders/NIH)
Schizophrenia (National Institute of Mental Health/NIH)
David Schneider (New York University, New York)
NIH Support: National Institute of Mental Health; National Institute on Deafness and Other Communication Disorders
A Neuronal Light Show
Posted on by Dr. Francis Collins
These colorful lights might look like a video vignette from one of the spectacular evening light shows taking place this holiday season. But they actually aren’t. These lights are illuminating the way to a much fuller understanding of the mammalian brain.
The video features a new research method called BARseq (Barcoded Anatomy Resolved by Sequencing). Created by a team of NIH-funded researchers led by Anthony Zador, Cold Spring Harbor Laboratory, NY, BARseq enables scientists to map in a matter of weeks the location of thousands of neurons in the mouse brain with greater precision than has ever been possible before.
How does it work? With BARseq, researchers generate uniquely identifying RNA barcodes and tag each individual neuron within brain tissue with one. As reported recently in the journal Cell, those barcodes allow them to keep track of the location of an individual cell amid millions of neurons. This also enables researchers to map the tangled paths of individual neurons from one region of the mouse brain to the next.
The video shows how the researchers read the barcodes. Each twinkling light is a barcoded neuron within a thin slice of mouse brain tissue. The changing colors from frame to frame correspond to one of the four letters, or chemical bases, in RNA (A=purple, G=blue, U=yellow, and C=white). A neuron that flashes blue, purple, yellow, white is tagged with a barcode that reads GAUC, while yellow, white, white, white is UCCC.
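The color-to-base mapping used in the video can be written out as a short lookup. The function name and color strings below are illustrative; only the mapping itself comes from the post:

```python
# Map each frame's flash color back to its RNA base, using the scheme
# described in the post: purple=A, blue=G, yellow=U, white=C.
COLOR_TO_BASE = {"purple": "A", "blue": "G", "yellow": "U", "white": "C"}

def read_barcode(frames):
    """Translate a neuron's sequence of flash colors into its RNA barcode."""
    return "".join(COLOR_TO_BASE[color] for color in frames)

# The two examples from the post:
print(read_barcode(["blue", "purple", "yellow", "white"]))  # GAUC
print(read_barcode(["yellow", "white", "white", "white"]))  # UCCC
```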
By sequencing and reading the barcodes to distinguish among seemingly identical cells, the researchers mapped the connections of more than 3,500 neurons in a mouse’s auditory cortex, a part of the brain involved in hearing. In fact, they report they’re now able to map tens of thousands of individual neurons in a mouse in a matter of weeks.
What makes BARseq even better than the team’s previous mapping approach, called MAPseq, is its ability to read the barcodes at their original location in the brain tissue. As a result, they can produce maps with much finer resolution. It’s also possible to maintain other important information about each mapped neuron’s identity and function, including the expression of its genes.
Zador reports that they’re continuing to use BARseq to produce maps of other essential areas of the mouse brain with more detail than had previously been possible. Ultimately, these maps will provide a firm foundation for better understanding of human thought, consciousness, and decision-making, along with how such mental processes get altered in conditions such as autism spectrum disorder, schizophrenia, and depression.
Here’s wishing everyone a safe and happy holiday season. It’s been a fantastic year in science, and I look forward to bringing you more cool NIH-supported research in 2020!
High-Throughput Mapping of Long-Range Neuronal Projection Using In Situ Sequencing. Chen X, Sun YC, Zhan H, Kebschull JM, Fischer S, Matho K, Huang ZJ, Gillis J, Zador AM. Cell. 2019 Oct 17;179(3):772-786.e19.
High-Throughput Mapping of Single-Neuron Projections by Sequencing of Barcoded RNA. Kebschull JM, Garcia da Silva P, Reid AP, Peikon ID, Albeanu DF, Zador AM. Neuron. 2016 Sep 7;91(5):975-987.
Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)
Zador Lab (Cold Spring Harbor Laboratory, Cold Spring Harbor, NY)
NIH Support: National Institute of Neurological Disorders and Stroke; National Institute on Drug Abuse; National Cancer Institute
Can a Mind-Reading Computer Speak for Those Who Cannot?
Posted on by Dr. Francis Collins
Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York
Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.
Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.
When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.
Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, or other similar devices to listen and respond appropriately to everyday commands.
As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.
In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.
From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.
Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.
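The strengthen-or-weaken idea can be illustrated with a single learned connection trained by gradient descent. This is a minimal sketch of the general principle, not the network the researchers used:

```python
# A single 'connection' learning from data: after each example, the weight
# is nudged stronger or weaker depending on whether that reduces the error.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pattern: y = 2x

w = 0.5    # initial connection strength (an arbitrary starting guess)
lr = 0.05  # learning rate: how big each adjustment is

for epoch in range(200):
    for x, y in data:
        pred = w * x          # the connection's current prediction
        error = pred - y      # how far off it was
        w -= lr * error * x   # strengthen or weaken the connection

print(round(w, 3))  # the weight settles near 2.0, the pattern in the data
```

Real deep learning networks apply this same update rule across millions of connections at once, which is how they come to "see" complex relationships in data.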
In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, it made the reconstructed sounds easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!
Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.
 Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.
Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.
Nima Mesgarani (Columbia University, New York)
NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health