Posted on by Dr. Francis Collins
These colorful lights might look like a video vignette from one of the spectacular evening light shows taking place this holiday season. But they actually aren’t. These lights are illuminating the way to a much fuller understanding of the mammalian brain.
The video features a new research method called BARseq (Barcoded Anatomy Resolved by Sequencing). Created by a team of NIH-funded researchers led by Anthony Zador, Cold Spring Harbor Laboratory, NY, BARseq enables scientists to map in a matter of weeks the location of thousands of neurons in the mouse brain with greater precision than has ever been possible before.
How does it work? With BARseq, researchers generate uniquely identifying RNA barcodes and then tag each individual neuron in a sample of brain tissue with one. As reported recently in the journal Cell, those barcodes allow them to keep track of the location of an individual cell amid millions of neurons. This also enables researchers to map the tangled paths of individual neurons from one region of the mouse brain to the next.
The video shows how the researchers read the barcodes. Each twinkling light is a barcoded neuron within a thin slice of mouse brain tissue. The changing colors from frame to frame correspond to one of the four letters, or chemical bases, in RNA (A=purple, G=blue, U=yellow, and C=white). A neuron that flashes blue, purple, yellow, white is tagged with a barcode that reads GAUC, while yellow, white, white, white is UCCC.
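To make the decoding concrete, here is a minimal sketch in Python of how a neuron's per-frame flash colors could be translated into its barcode using the color-to-base key above. The color names and function are purely illustrative assumptions, not the team's actual image-processing pipeline.

```python
# Illustrative sketch (not the authors' pipeline): decode a neuron's RNA barcode
# from the sequence of colors it flashes across imaging frames, using the
# color-to-base mapping described in the post (A=purple, G=blue, U=yellow, C=white).

COLOR_TO_BASE = {"purple": "A", "blue": "G", "yellow": "U", "white": "C"}

def decode_barcode(flash_colors):
    """Translate one neuron's per-frame flash colors into its RNA barcode."""
    return "".join(COLOR_TO_BASE[color] for color in flash_colors)

# The two examples from the post:
print(decode_barcode(["blue", "purple", "yellow", "white"]))   # GAUC
print(decode_barcode(["yellow", "white", "white", "white"]))   # UCCC
```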
By sequencing and reading the barcodes to distinguish among seemingly identical cells, the researchers mapped the connections of more than 3,500 neurons in a mouse’s auditory cortex, a part of the brain involved in hearing. In fact, they report they’re now able to map tens of thousands of individual neurons in a mouse in a matter of weeks.
What makes BARseq even better than the team’s previous mapping approach, called MAPseq, is its ability to read the barcodes at their original location in the brain tissue. As a result, they can produce maps with much finer resolution. It’s also possible to retain other important information about each mapped neuron’s identity and function, including the expression of its genes.
Zador reports that they’re continuing to use BARseq to produce maps of other essential areas of the mouse brain with more detail than had previously been possible. Ultimately, these maps will provide a firm foundation for better understanding of human thought, consciousness, and decision-making, along with how such mental processes get altered in conditions such as autism spectrum disorder, schizophrenia, and depression.
Here’s wishing everyone a safe and happy holiday season. It’s been a fantastic year in science, and I look forward to bringing you more cool NIH-supported research in 2020!
 High-Throughput Mapping of Long-Range Neuronal Projection Using In Situ Sequencing. Chen X, Sun YC, Zhan H, Kebschull JM, Fischer S, Matho K, Huang ZJ, Gillis J, Zador AM. Cell. 2019 Oct 17;179(3):772-786.e19.
 High-Throughput Mapping of Single-Neuron Projections by Sequencing of Barcoded RNA. Kebschull JM, Garcia da Silva P, Reid AP, Peikon ID, Albeanu DF, Zador AM. Neuron. 2016 Sep 7;91(5):975-987.
Zador Lab (Cold Spring Harbor Laboratory, Cold Spring Harbor, NY)
NIH Support: National Institute of Neurological Disorders and Stroke; National Institute on Drug Abuse; National Cancer Institute
Posted on by Dr. Francis Collins
Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York
Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.
Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak due to conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.
When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.
Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, and other similar devices to listen and respond appropriately to everyday commands.
As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.
In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.
From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.
Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.
In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, the reconstructed sounds became easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
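As a loose illustration of that “clean up the reconstruction” idea, the sketch below trains a tiny feed-forward network in Python (using PyTorch) to map noisy, vocoder-style feature frames back toward clean target frames. The random stand-in data, layer sizes, and training loop are assumptions made up for illustration; this is not the model or data from the paper.

```python
# A loose illustration (not the authors' model or data): train a small
# feed-forward network to map noisy audio-like feature frames toward clean
# target frames, mirroring the "clean up the reconstruction" idea described above.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_frames, n_features = 1000, 32                 # hypothetical spectrogram-like frames
clean = torch.rand(n_frames, n_features)        # stand-in for target speech features
noisy = clean + 0.3 * torch.randn_like(clean)   # stand-in for raw vocoder output

# Tiny denoising network: noisy features in, cleaned-up features out.
model = nn.Sequential(
    nn.Linear(n_features, 64),
    nn.ReLU(),
    nn.Linear(64, n_features),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)  # how far the cleaned output is from the target
    loss.backward()                      # strengthen/weaken connections via gradients
    optimizer.step()

print(f"final reconstruction error: {loss.item():.4f}")
```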
The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!
Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.
 Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.
Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.
Nima Mesgarani (Columbia University, New York)
NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health
Posted on by Dr. Francis Collins
Many people with vision loss—including such gifted musicians as the late Doc Watson (my favorite guitar picker), Stevie Wonder, Andrea Bocelli, and the Blind Boys of Alabama—are thought to have supersensitive hearing. They are often much better at discriminating pitch, locating the origin of sounds, and hearing softer tones than people who can see. Now, a new animal study suggests that even a relatively brief period of simulated blindness may have the power to enhance hearing among those with normal vision.
In the study, NIH-funded researchers at the University of Maryland in College Park, and Johns Hopkins University in Baltimore, found that when they kept adult mice in complete darkness for one week, the animals’ ability to hear significantly improved. What’s more, when they examined the animals’ brains, the researchers detected changes in the connections among neurons in the part of the brain where sound is processed, the auditory cortex.