Posted on by Lawrence Tabak, D.D.S., Ph.D.
Detecting the earliest signs of Alzheimer’s disease (AD) in middle-aged people and tracking its progression over time in research studies continue to be challenging. But it is easier to do in shorter-lived mammalian models of AD, especially when paired with cutting-edge imaging tools that look across different regions of the brain. These tools can help basic researchers detect telltale early changes that might point the way to better prevention or treatment strategies in humans.
That’s the case in this technicolor snapshot showing early patterns of inflammation in the brain of a relatively young mouse bred to develop a condition similar to AD. You can see abnormally high levels of inflammation throughout the front part of the brain (orange, green) as well as in its middle part—the septum that divides the brain’s two sides. This level of inflammation suggests that the brain has been injured.
What’s striking is that no inflammation is detectable in parts of the brain rich in cholinergic neurons (pink), a distinct type of nerve cell that helps to control memory, movement, and attention. Though these neurons remain healthy at this stage, researchers would like to know whether the inflammation will eventually destroy them as AD progresses.
This colorful image comes from medical student Sakar Budhathoki, who earlier worked in the NIH labs of Lorna Role and David Talmage, National Institute of Neurological Disorders and Stroke (NINDS). Budhathoki, teaming with postdoctoral scientist Mala Ananth, used a specially designed wide-field scanner that sweeps across brain tissue to light up fluorescent markers and capture the image. It’s one of the scanning approaches pioneered in the Role and Talmage labs [1,2].
The two NIH labs are exploring possible links between abnormal inflammation and damage to the brain’s cholinergic signaling system. In fact, medications that target cholinergic function remain the first line of treatment for people with AD and other dementias. And yet, researchers still haven’t adequately determined when, why, and how the loss of these cholinergic neurons relates to AD.
It’s a rich area of basic research that offers hope for greater understanding of AD in the future. It’s also the source of some fascinating images like this one, which was part of the 2022 Show Us Your BRAINs! Photo and Video Contest, supported by NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.
[1] NeuRegenerate: A framework for visualizing neurodegeneration. Boorboor S, Mathew S, Ananth M, Talmage D, Role LW, Kaufman AE. IEEE Trans Vis Comput Graph. 2021 Nov 10. Epub ahead of print.
[2] NeuroConstruct: 3D reconstruction and visualization of neurites in optical microscopy brain images. Ghahremani P, Boorboor S, Mirhosseini P, Gudisagar C, Ananth M, Talmage D, Role LW, Kaufman AE. IEEE Trans Vis Comput Graph. 2022 Dec;28(12):4951-4965.
Alzheimer’s Disease & Related Dementias (National Institute on Aging/NIH)
Role Lab (National Institute of Neurological Disorders and Stroke/NIH)
Talmage Lab (NINDS)
Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative)
NIH Support: National Institute of Neurological Disorders and Stroke
Posted on by Lawrence Tabak, D.D.S., Ph.D.
Happy holidays to one and all! This short science video brings to mind all those twinkling lights now brightening the night, as we mark the beginning of winter and shortest day of the year. This video also helps to remind us about the power of connection this holiday season.
It shows a motor neuron in a mouse’s primary motor cortex. In this portion of the brain, which controls voluntary movement, heavily branched neural projections interconnect, sending and receiving signals to and from distant parts of the body. A single motor neuron can receive thousands of inputs at a time from other branching sensory cells, depicted in the video as an array of blinking lights. It’s only through these connections—through open communication and cooperation—that we can make the voluntary movements needed to navigate and enjoy our world in all its wonder. One neuron, like one person, can’t do it all alone.
This power of connection, captured in this award-winning video from the 2022 Show Us Your BRAINs! Photo and Video Contest, comes from Forrest Collman, Allen Institute for Brain Science, Seattle. The contest is part of NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.
In the version above, we’ve taken some liberties with the original video to enhance the twinkling lights from the synaptic connections. But creating the original was quite a task. Collman sifted through reams of data from high-resolution electron microscopy imaging of the motor cortex to masterfully reconstruct this individual motor neuron and its connections.
Those data came from The Machine Intelligence from Cortical Networks (MICrONS) program, supported by the Intelligence Advanced Research Projects Activity (IARPA). It’s part of the Office of the Director of National Intelligence, one of NIH’s governmental collaborators in the BRAIN Initiative.
The MICrONS program aims to better understand the brain’s internal wiring. With this increased knowledge, researchers will develop more sophisticated machine learning algorithms for artificial intelligence applications, which will in turn advance fundamental discoveries in basic science and the practice of life-saving medicine. For instance, these applications may help in the future to detect and evaluate a broad range of neural conditions, including those that affect the primary motor cortex.
Pretty cool stuff. So, as you spend this holiday season with friends and family, let this video and its twinkling lights remind you that there’s much more to the season than eating, drinking, and watching football games.
The holidays are very much about the power of connection for people of all faiths, beliefs, and traditions. It’s about taking time out from the everyday to join together to share memories of days gone by as we build new memories and stronger bonds of cooperation for the years to come. With this in mind, happy holidays to one and all.
“NIH BRAIN Initiative Unveils Detailed Atlas of the Mammalian Primary Motor Cortex,” NIH News Release, October 6, 2021
Forrest Collman (Allen Institute for Brain Science, Seattle)
Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative)
Posted on by Lawrence Tabak, D.D.S., Ph.D.
People who have lost the ability to speak due to a severe disability still want to get the words out. They just can’t physically do it. But in our digital age, there is now a fascinating way to overcome such profound physical limitations. Computers are being taught to decode brain waves as a person tries to speak and then interactively translate them onto a computer screen in real time.
The latest progress, demonstrated in the video above, establishes that it’s quite possible for computers trained with the help of current artificial intelligence (AI) methods to restore a vocabulary of more than 1,000 words for people with the mental but not physical ability to speak. That covers more than 85 percent of day-to-day communication in English. With further refinements, the researchers say a 9,000-word vocabulary is well within reach.
The findings, published in the journal Nature Communications, come from a team led by Edward Chang, University of California, San Francisco. Earlier, Chang and colleagues established that this AI-enabled system could directly decode 50 full words in real time from brain waves alone in a person with paralysis trying to speak. The study is known as BRAVO, short for Brain-Computer Interface Restoration of Arm and Voice.
In the latest BRAVO study, the team wanted to figure out how to condense the English language into compact units for easier decoding and expand that 50-word vocabulary. They did it in the same way we all do: by focusing not on complete words, but on the 26-letter alphabet.
The study involved a 36-year-old male with severe limb and vocal paralysis. The team designed a sentence-spelling pipeline for this individual, which enabled him to silently spell out messages using code words corresponding to each of the 26 letters in his head. As he did so, a high-density array of electrodes implanted over the brain’s sensorimotor cortex, part of the cerebral cortex, recorded his brain waves.
A sophisticated system including signal processing, speech detection, word classification, and language modeling then translated those thoughts into coherent words and complete sentences on a computer screen. This so-called speech neuroprosthesis system allows those who have lost their speech to perform roughly the equivalent of text messaging.
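The study’s actual decoders are trained neural networks, but the overall flow of such a pipeline can be sketched in miniature. The snippet below is a hypothetical illustration only: the threshold-based detector, the one-feature-per-letter classifier, and the tiny vocabulary are simplified stand-ins for the trained models and 1,152-word vocabulary used in the study.

```python
import difflib

# Tiny stand-in vocabulary; the actual study used 1,152 words.
VOCAB = ["good", "morning", "i", "am", "very"]

def detect_attempt(window, threshold=0.5):
    """Speech detection: flag feature windows whose mean activity
    exceeds a threshold, i.e., the user is attempting to spell."""
    return sum(window) / len(window) > threshold

def classify_letter(window):
    """Letter classification stub: map the peak feature index to a letter."""
    peak = window.index(max(window))
    return chr(ord("a") + peak % 26)

def language_model(letters):
    """Language modeling: snap the raw letter string to the closest
    vocabulary word, correcting occasional misclassified letters."""
    guess = "".join(letters)
    matches = difflib.get_close_matches(guess, VOCAB, n=1, cutoff=0.0)
    return matches[0] if matches else guess

def decode_word(windows):
    """Full pipeline: detect attempts, classify letters, apply the model."""
    letters = [classify_letter(w) for w in windows if detect_attempt(w)]
    return language_model(letters)
```

Even in this toy version, the language-modeling stage shows why the approach is robust: a letter string with one misclassified character still snaps back to the intended vocabulary word.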
Chang’s team put their spelling system to the test first by asking the participant to silently reproduce a sentence displayed on a screen. They then moved on to conversations, in which the participant was asked a question and could answer freely. For instance, as in the video above, when the computer asked, “How are you today?” he responded, “I am very good.” When asked about his favorite time of year, he answered, “summertime.” An attempted hand movement signaled the computer when he was done speaking.
The computer didn’t get it exactly right every time. For instance, in the initial trials with the target sentence, “good morning,” the computer got it exactly right in one case and in another came up with “good for legs.” But, overall, the tests show that the AI device could decode silently spelled letters with a high degree of accuracy, producing sentences from a 1,152-word vocabulary at a speed of about 29 characters per minute.
On average, the spelling system decoded a character incorrectly about 6 percent of the time. That’s really good when you consider how common errors are with dictation software or in any text message conversation.
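That 6 percent figure reflects a character error rate, a standard metric computed from the edit distance between the decoded text and the intended text. The calculation itself is simple; below is a minimal version using the classic Levenshtein distance (a standard textbook algorithm, not the study’s own code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via dynamic programming: the minimum number
    of single-character insertions, deletions, and substitutions needed
    to turn ref into hyp."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # delete r
                           cur[j - 1] + 1,           # insert h
                           prev[j - 1] + (r != h)))  # substitute (free if equal)
        prev = cur
    return prev[-1]

def char_error_rate(reference, decoded):
    """Edit distance normalized by the length of the intended text."""
    return edit_distance(reference, decoded) / len(reference)
```

For example, decoding “good morning” as “good morming” is one substitution among twelve characters, a character error rate of about 8 percent; a 6 percent rate means roughly one character in seventeen needs correction.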
Of course, much more work is needed to test this approach in many more people. The researchers don’t yet know how individual differences or specific medical conditions might affect the outcomes. They suspect that this general approach will work for anyone, so long as the person remains mentally capable of thinking through and attempting to speak.
They also envision future improvements as part of their BRAVO study. For instance, it may be possible to develop a system capable of more rapid decoding of many commonly used words or phrases. Such a system could then reserve the slower spelling method for other, less common words.
But, as these results clearly demonstrate, this combination of artificial intelligence and silently controlled speech neuroprostheses holds fantastic potential to restore not just speech but meaningful communication and authentic connection between people who’ve lost the ability to speak and their loved ones. For that, I say BRAVO.
Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP, Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-Chan A, Ganguly K, Chang EF. Nat Commun. 2022;13(1):6510.
Neuroprosthesis for decoding speech in a paralyzed person with anarthria. Moses DA, Metzger SL, Liu JR, et al. N Engl J Med. 2021 Jul 15;385(3):217-227.
Voice, Speech, and Language (National Institute on Deafness and Other Communication Disorders/NIH)
ECoG BMI for Motor and Speech Control (BRAVO) (ClinicalTrials.gov)
Chang Lab (University of California, San Francisco)
NIH Support: National Institute on Deafness and Other Communication Disorders
Posted on by Lawrence Tabak, D.D.S., Ph.D.
If you’ve been staying up late to watch the World Series, you probably spent those nine innings hoping for superstars Bryce Harper or José Altuve to square up a fastball and send it sailing out of the yard. Long-time baseball fans like me can distinguish immediately the loud crack of a home-run swing from the dull thud of a weak grounder.
Our brains have such a fascinating ability to discern “right” sounds from “wrong” ones in just an instant. This applies not only in baseball, but in the things that we do throughout the day, whether it’s hitting the right note on a musical instrument or pushing the car door just enough to click it shut without slamming.
Now, an NIH-funded team of neuroscientists has discovered what happens in the brain when one hears an expected or “right” sound versus a “wrong” one after completing a task. It turns out that the mammalian brain is remarkably good at predicting both when a sound should happen and what it ideally ought to sound like. Any notable mismatch between that expectation and the feedback, and the hearing center of the brain reacts.
It may seem intuitive that humans and other animals have this auditory ability, but researchers didn’t know how neurons in the brain’s auditory cortex, where sound is processed, make these snap judgments to learn complex tasks. In the study published in the journal Current Biology, David Schneider, New York University, New York, set out to understand how this familiar experience really works.
To do it, Schneider and colleagues, including postdoctoral fellow Nicholas Audette, looked to mice. They are a lot easier to study in the lab than humans and, while their brains aren’t miniature versions of our own, our sensory systems share many fundamental similarities because we are both mammals.
Of course, mice don’t go around hitting home runs or opening and closing doors. So, the researchers’ first step was to train the animals on a task akin to closing the car door: pushing a lever with their paws in just the right way to receive a reward. They also played a distinctive tone each time the lever reached that perfect position.
After making thousands of attempts and hearing the associated sound, the mice knew just what to do—and what it should sound like when they did it right. Their studies showed that, when the researchers removed the sound, played the wrong sound, or played the correct sound at the wrong time, the mice took notice and adjusted their actions, just as you might do if you pushed a car door shut and the resulting click wasn’t right.
To find out how neurons in the auditory cortex responded to produce the observed behaviors, Schneider’s team also recorded brain activity. Intriguingly, they found that auditory neurons hardly responded when a mouse pushed the lever and heard the sound they’d learned to expect. It was only when something about the sound was “off” that their auditory neurons suddenly crackled with activity.
As the researchers explained, it seems from these studies that the mammalian auditory cortex responds not to the sounds themselves but to how those sounds match up to, or violate, expectations. When the researchers canceled the sound altogether, as might happen if you didn’t push a car door hard enough to produce the familiar click shut, activity within a select group of auditory neurons spiked right when the mice should have heard the sound.
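Conceptually, the finding fits a simple prediction-error scheme: the cortex compares incoming sound against an internally generated prediction and responds in proportion to the mismatch. The toy rule below is only a conceptual sketch of that idea, not a model from the study; the numbers are arbitrary stand-ins for sound features such as pitch or timing.

```python
def auditory_response(predicted, actual):
    """Toy prediction-error rule: the response scales with the mismatch
    between the expected sound feature and what was actually heard.
    An omitted sound (actual=None) also drives a response, since the
    prediction itself goes unanswered."""
    if actual is None:
        return abs(predicted)
    return abs(actual - predicted)

# Expected tone, heard exactly as predicted: the neurons stay quiet.
quiet = auditory_response(predicted=1.0, actual=1.0)

# Wrong tone, or silence where a tone was expected: the neurons respond.
wrong = auditory_response(predicted=1.0, actual=1.8)
omitted = auditory_response(predicted=1.0, actual=None)
```

This mirrors the observed behavior: near-silence from auditory neurons when the lever press produced the expected tone, and a burst of activity when the tone was wrong, mistimed, or missing.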
Schneider’s team notes that the same brain areas and circuitry that predict and process self-generated sounds in everyday tasks also play a role in conditions such as schizophrenia, in which people may hear voices or other sounds that aren’t there. The team hopes their studies will help to explain what goes wrong—and perhaps how to help—in schizophrenia and other neural disorders. Perhaps they’ll also learn more about what goes through the healthy brain when anticipating the satisfying click of a closed door or the loud crack of a World Series home run.
Precise movement-based predictions in the mouse auditory cortex. Audette NJ, Zhou WX, Chioma A, Schneider DM. Curr Biol. 2022 Oct 24.
How Do We Hear? (National Institute on Deafness and Other Communication Disorders/NIH)
Schizophrenia (National Institute of Mental Health/NIH)
David Schneider (New York University, New York)
NIH Support: National Institute of Mental Health; National Institute on Deafness and Other Communication Disorders