How Neurons Make Connections
Posted by Lawrence Tabak, D.D.S., Ph.D.

For many people, fruit flies are just tiny pests, the kind that hover over a bowl of peaches or a bunch of bananas. But for a dedicated community of researchers, fruit flies are an excellent model organism and a rich source of information about how neurons self-organize during the insect’s early development and form a complex, fully functioning nervous system.
That’s the scientific story on display in this beautiful image of a larval fruit fly’s developing nervous system. Its subtext is that fundamental discoveries in the fruit fly, known in textbooks as Drosophila melanogaster, provide basic clues into the development and repair of the human nervous system. That’s because humans and fruit flies, though very distantly related, still share many genes involved in their growth and development. In fact, 60 percent of the Drosophila genome is identical to ours.
Once hatched, as shown in this image, a larval fly uses neurons (magenta) to sense its environment. These include neurons that sense the way its body presses against the surrounding terrain, as needed to coordinate the movements of its segmented body parts and crawl in all directions.
This same set of neurons also registers painful sensations, such as the attack of a parasitic wasp. Paintbrush-like neurons in the fly’s developing head (magenta, left side) allow the insect to taste the sweetness of a peach or banana.
There is a second subtype of neurons, known as proprioceptors (green). These neurons will give the young fly its “sixth sense” of where its body is positioned in space. The complete collection of developing neurons shown here is responsible for all the fly’s primary sensations. These neurons also send their messages on to the insect’s central nervous system, which contains thousands of other neurons that are hidden from view.
Emily Heckman, now a postdoctoral researcher at the Michigan Neuroscience Institute, University of Michigan, Ann Arbor, captured this image during her graduate work in the lab of Chris Doe, University of Oregon, Eugene. For her keen eye, she received a trainee/early-career BioArt Award from the Federation of American Societies for Experimental Biology (FASEB), which each year celebrates the art of science.
The image is one of many from a much larger effort in the Doe lab that explores the way neurons that will partner find each other and link up to drive development. Heckman and Doe also wanted to know how neurons in the developing brain interconnect into integrated neural networks, or circuits, and respond when something goes wrong. To find out, they disrupted sensory neurons or forced them to take alternate paths and watched to see what would happen.
As published in the journal eLife [1], their findings show that the developing system has an innate plasticity: sensory neurons instruct one another on how to meet up just right. If one suddenly takes an alternate route, its partner can still reach out and make the connection. Once an electrically active neural connection, or synapse, is made, the neural signals themselves slow or stop further growth. This kind of adaptation and crosstalk between neurons takes place only during a critical window in development.
Heckman says part of what she enjoys about the image is how it highlights that many sensory neurons develop simultaneously and in a coordinated process. What’s also great about visualizing these events in the fly embryo is that she and other researchers can track many individual neurons from the time they’re budding stem cells to when they become a fully functional and interconnected neural circuit.
So, the next time you see fruit flies hovering in the kitchen, just remember there’s more to their swarm than you think. Lessons learned from studying them will help point researchers toward new ways to restore or rebuild neural connections in people after devastating disruptions from injury or disease.
Reference:
[1] Presynaptic contact and activity opposingly regulate postsynaptic dendrite outgrowth. Heckman EL, Doe CQ. eLife. 2022 Nov 30;11:e82093.
Links:
Research Organisms (National Institute of General Medical Sciences/NIH)
Doe Lab (University of Oregon, Eugene)
Emily Heckman (University of Michigan, Ann Arbor)
BioArt Awards (Federation of American Societies for Experimental Biology, Rockville, MD)
NIH Support: Eunice Kennedy Shriver National Institute of Child Health and Human Development
Celebrating the Power of Connection This Holiday Season
Posted by Lawrence Tabak, D.D.S., Ph.D.
Happy holidays to one and all! This short science video brings to mind all those twinkling lights now brightening the night, as we mark the beginning of winter and the shortest day of the year. This video also helps to remind us about the power of connection this holiday season.
It shows a motor neuron in a mouse’s primary motor cortex. In this portion of the brain, which controls voluntary movement, heavily branched neural projections interconnect, sending and receiving signals to and from distant parts of the body. A single motor neuron can receive thousands of inputs at a time from other branching sensory cells, depicted in the video as an array of blinking lights. It’s only through these connections, through open communication and cooperation, that the voluntary movements we use to navigate and enjoy our world in all its wonder become possible. One neuron, like one person, can’t do it all alone.
This power of connection, captured in this award-winning video from the 2022 Show Us Your BRAINs! Photo & Video Contest, comes from Forrest Collman, Allen Institute for Brain Science, Seattle. The contest is part of NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.
In the version above, we’ve taken some liberties with the original video to enhance the twinkling lights from the synaptic connections. But creating the original was quite a task. Collman sifted through reams of data from high-resolution electron microscopy imaging of the motor cortex to masterfully reconstruct this individual motor neuron and its connections.
Those data came from the Machine Intelligence from Cortical Networks (MICrONS) program, supported by the Intelligence Advanced Research Projects Activity (IARPA). IARPA is part of the Office of the Director of National Intelligence, one of NIH’s governmental collaborators in the BRAIN Initiative.
The MICrONS program aims to better understand the brain’s internal wiring. With this increased knowledge, researchers will develop more sophisticated machine learning algorithms for artificial intelligence applications, which will in turn advance fundamental discoveries in basic science and the practice of life-saving medicine. For instance, these applications may one day help to detect and evaluate a broad range of neural conditions, including those that affect the primary motor cortex.
Pretty cool stuff. So, as you spend this holiday season with friends and family, let this video and its twinkling lights remind you that there’s much more to the season than eating, drinking, and watching football games.
The holidays are very much about the power of connection for people of all faiths, beliefs, and traditions. It’s about taking time out from the everyday to join together to share memories of days gone by as we build new memories and stronger bonds of cooperation for the years to come. With this in mind, happy holidays to one and all.
Links:
“NIH BRAIN Initiative Unveils Detailed Atlas of the Mammalian Primary Motor Cortex,” NIH News Release, October 6, 2021
Forrest Collman (Allen Institute for Brain Science, Seattle)
Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)
Show Us Your Brains Photo and Video Contest (BRAIN Initiative)
From Brain Waves to Real-Time Text Messaging
Posted by Lawrence Tabak, D.D.S., Ph.D.
People who have lost the ability to speak due to a severe disability still want to get the words out. They just can’t physically do it. But in our digital age, there is now a fascinating way to overcome such profound physical limitations. Computers are being taught to decode brain waves as a person tries to speak and then interactively translate them onto a computer screen in real time.
The latest progress, demonstrated in the video above, establishes that it’s quite possible for computers trained with the help of current artificial intelligence (AI) methods to restore a vocabulary of more than 1,000 words for people with the mental but not the physical ability to speak. That covers more than 85 percent of most day-to-day communication in English. With further refinements, the researchers say a 9,000-word vocabulary is well within reach.
The findings published in the journal Nature Communications come from a team led by Edward Chang, University of California, San Francisco [1]. Earlier, Chang and colleagues established that this AI-enabled system could directly decode 50 full words in real time from brain waves alone in a person with paralysis trying to speak [2]. The study is known as BRAVO, short for Brain-computer interface Restoration Of Arm and Voice.
In the latest BRAVO study, the team wanted to figure out how to condense the English language into compact units for easier decoding and expand that 50-word vocabulary. They did it in the same way we all do: by focusing not on complete words, but on the 26-letter alphabet.
The study involved a 36-year-old male with severe limb and vocal paralysis. The team designed a sentence-spelling pipeline for this individual, which enabled him to silently spell out messages in his head using code words corresponding to each of the 26 letters. As he did so, a high-density array of electrodes implanted over the brain’s sensorimotor cortex, part of the cerebral cortex, recorded his brain waves.
A sophisticated system including signal processing, speech detection, word classification, and language modeling then translated those thoughts into coherent words and complete sentences on a computer screen. This so-called speech neuroprosthesis system allows those who have lost their speech to perform roughly the equivalent of text messaging.
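For readers curious about how the pieces of such a pipeline fit together, here is a minimal sketch in Python. It is purely illustrative, not the BRAVO team’s code: the toy vocabulary, the decode_word function, and the classifier probabilities are all invented for this example. The idea is that a neural classifier turns each silently spelled letter’s brain activity into a probability over the alphabet, and a language-model prior over whole words helps pick the most plausible word.

```python
# A minimal, illustrative sketch of the decoding idea described above; it is
# not the BRAVO team's code. We assume a neural classifier has already turned
# each spelled letter's brain activity into a probability over the alphabet,
# and a simple word-level language model rescores candidate words.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
VOCAB = {"good": 0.5, "morning": 0.3, "for": 0.15, "legs": 0.05}  # toy prior

def decode_word(letter_probs: np.ndarray) -> str:
    """Pick the vocabulary word that best explains the letter probabilities.

    letter_probs has shape (num_letters, 26): one row of classifier output
    per silently spelled letter.
    """
    best_word, best_score = "", -np.inf
    for word, prior in VOCAB.items():
        if len(word) != len(letter_probs):
            continue
        # Log-likelihood of the word under the classifier, plus the LM prior.
        score = np.log(prior)
        for row, ch in zip(letter_probs, word):
            score += np.log(row[ALPHABET.index(ch)] + 1e-12)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy example: four noisy letter classifications that should spell "good".
rng = np.random.default_rng(0)
probs = np.full((4, 26), 0.01)
for i, ch in enumerate("good"):
    probs[i, ALPHABET.index(ch)] = 0.7
probs += rng.uniform(0, 0.01, probs.shape)
probs /= probs.sum(axis=1, keepdims=True)  # normalize each row

print(decode_word(probs))  # -> "good"
```

The real system adds speech detection, a far larger vocabulary, and a sentence-level language model, but the basic interplay of letter probabilities and word priors follows this same logic.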
Chang’s team put their spelling system to the test first by asking the participant to silently reproduce a sentence displayed on a screen. They then moved on to conversations, in which the participant was asked a question and could answer freely. For instance, as in the video above, when the computer asked, “How are you today?” he responded, “I am very good.” When asked about his favorite time of year, he answered, “summertime.” An attempted hand movement signaled the computer when he was done speaking.
The computer didn’t get it exactly right every time. For instance, in the initial trials with the target sentence “good morning,” the computer got it exactly right in one case and in another came up with “good for legs.” But, overall, the tests showed that the AI device could decode silently spelled letters with a high degree of accuracy, producing sentences from a 1,152-word vocabulary at a speed of about 29 characters per minute.
On average, the spelling system got it wrong 6 percent of the time. That’s really good when you consider how common it is for errors to arise with dictation software or in any text message conversation.
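Error rates like this are conventionally scored as a character error rate: the edit distance between the decoded text and the intended text, divided by the intended text’s length. Here is a brief sketch of that standard metric, using one of the trial outputs above (the study’s exact scoring procedure may differ):

```python
# Character error rate (CER) is typically scored as edit distance divided by
# the length of the reference text; a sketch of the standard metric, not the
# study's own scoring code.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, decoded: str) -> float:
    return edit_distance(reference, decoded) / len(reference)

print(f"{cer('good morning', 'good for legs'):.0%}")  # prints 42% for this pair
```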
Of course, much more work is needed to test this approach in many more people. The researchers don’t yet know how individual differences or specific medical conditions might affect the outcomes. They suspect that this general approach will work for anyone so long as they remain mentally capable of thinking through and attempting to speak.
They also envision future improvements as part of their BRAVO study. For instance, it may be possible to develop a system capable of more rapid decoding of many commonly used words or phrases. Such a system could then reserve the slower spelling method for other, less common words.
But, as these results clearly demonstrate, this combination of artificial intelligence and a silently controlled speech neuroprosthesis holds fantastic potential to restore not just speech but meaningful communication and authentic connection between individuals who’ve lost the ability to speak and their loved ones. For that, I say BRAVO.
References:
[1] Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP, Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-Chan A, Ganguly K, Chang EF. Nat Commun. 2022;13:6510.
[2] Neuroprosthesis for decoding speech in a paralyzed person with anarthria. Moses DA, Metzger SL, Liu JR, Tu-Chan A, Ganguly K, Chang EF, et al. N Engl J Med. 2021 Jul 15;385(3):217-227.
Links:
Voice, Speech, and Language (National Institute on Deafness and Other Communication Disorders/NIH)
ECoG BMI for Motor and Speech Control (BRAVO) (ClinicalTrials.gov)
Chang Lab (University of California, San Francisco)
NIH Support: National Institute on Deafness and Other Communication Disorders
How the Brain Differentiates the ‘Click,’ ‘Crack,’ or ‘Thud’ of Everyday Tasks
Posted by Lawrence Tabak, D.D.S., Ph.D.

If you’ve been staying up late to watch the World Series, you probably spent those nine innings hoping for superstars Bryce Harper or José Altuve to square up a fastball and send it sailing out of the yard. Long-time baseball fans like me can distinguish immediately the loud crack of a home-run swing from the dull thud of a weak grounder.
Our brains have such a fascinating ability to discern “right” sounds from “wrong” ones in just an instant. This applies not only in baseball, but in the things that we do throughout the day, whether it’s hitting the right note on a musical instrument or pushing the car door just enough to click it shut without slamming.
Now, an NIH-funded team of neuroscientists has discovered what happens in the brain when one hears an expected or “right” sound versus a “wrong” one after completing a task. It turns out that the mammalian brain is remarkably good at predicting both when a sound should happen and what it ideally ought to sound like. When there’s a notable mismatch between that expectation and the actual feedback, the hearing center of the brain reacts.
It may seem intuitive that humans and other animals have this auditory ability, but researchers didn’t know how neurons in the brain’s auditory cortex, where sound is processed, make these snap judgments to learn complex tasks. In the study published in the journal Current Biology [1], David Schneider, New York University, New York, set out to understand how this familiar experience really works.
To do it, Schneider and colleagues, including postdoctoral fellow Nicholas Audette, looked to mice. They are a lot easier to study in the lab than humans and, while their brains aren’t miniature versions of our own, our sensory systems share many fundamental similarities because we are both mammals.
Of course, mice don’t go around hitting home runs or opening and closing doors. So, the researchers’ first step was training the animals to complete a task akin to closing the car door: they trained the mice to push a lever with their paws in just the right way to receive a reward. They also played a distinctive tone each time the lever reached that perfect position.
After making thousands of attempts and hearing the associated sound, the mice knew just what to do—and what it should sound like when they did it right. Their studies showed that, when the researchers removed the sound, played the wrong sound, or played the correct sound at the wrong time, the mice took notice and adjusted their actions, just as you might do if you pushed a car door shut and the resulting click wasn’t right.
To find out how neurons in the auditory cortex responded to produce the observed behaviors, Schneider’s team also recorded brain activity. Intriguingly, they found that auditory neurons hardly responded when a mouse pushed the lever and heard the sound they’d learned to expect. It was only when something about the sound was “off” that their auditory neurons suddenly crackled with activity.
As the researchers explained, it seems from these studies that the mammalian auditory cortex responds not to the sounds themselves but to how those sounds match up to, or violate, expectations. When the researchers canceled the sound altogether, as might happen if you didn’t push a car door hard enough to produce the familiar click, activity within a select group of auditory neurons spiked right when the mice should have heard the sound.
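In computational terms, these neurons behave like a prediction-error signal: quiet when the world matches the brain’s forecast, active when it doesn’t. Here is a deliberately simple toy model in Python, an illustration of the concept rather than anything from the study itself:

```python
# A toy sketch of the prediction-error idea, not the study's analysis code:
# an "auditory neuron" that stays quiet when the expected tone arrives on time
# and fires in proportion to any mismatch in identity or timing.
import numpy as np

def prediction_error(expected: np.ndarray, heard: np.ndarray) -> np.ndarray:
    """Mismatch between predicted and actual sound, per time bin."""
    return np.abs(heard - expected)

# Time axis: 10 bins; the lever reaches position at bin 4, tone expected there.
expected = np.zeros(10); expected[4] = 1.0

on_time   = expected.copy()                    # correct tone, correct time
omitted   = np.zeros(10)                       # no tone at all
late_tone = np.zeros(10); late_tone[7] = 1.0   # correct tone, wrong time

for label, heard in [("on time", on_time), ("omitted", omitted), ("late", late_tone)]:
    err = prediction_error(expected, heard)
    print(f"{label:8s} total error = {err.sum():.1f}")
# on time -> 0.0 (neurons quiet); omitted -> 1.0 and late -> 2.0 (neurons fire)
```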
Schneider’s team notes that the same brain areas and circuitry that predict and process self-generated sounds in everyday tasks also play a role in conditions such as schizophrenia, in which people may hear voices or other sounds that aren’t there. The team hopes their studies will help to explain what goes wrong—and perhaps how to help—in schizophrenia and other neural disorders. Perhaps they’ll also learn more about what goes through the healthy brain when anticipating the satisfying click of a closed door or the loud crack of a World Series home run.
Reference:
[1] Precise movement-based predictions in the mouse auditory cortex. Audette NJ, Zhou WX, Chioma A, Schneider DM. Curr Biol. 2022 Oct 24.
Links:
How Do We Hear? (National Institute on Deafness and Other Communication Disorders/NIH)
Schizophrenia (National Institute of Mental Health/NIH)
David Schneider (New York University, New York)
NIH Support: National Institute of Mental Health; National Institute on Deafness and Other Communication Disorders
The Amazing Brain: Where Thoughts Trigger Body Movement
Posted by Lawrence Tabak, D.D.S., Ph.D.

You’re looking at a section of a mammalian motor cortex (left), the part of the brain where thoughts trigger our body movements. Part of the section is also shown (right) in higher resolution to help you see the intricate details.
These views are incredibly detailed, yet they weren’t produced on a microscope or any current state-of-the-art imaging device. They were created on a supercomputer. Researchers input vast amounts of data covering the activity of the motor cortex to model this highly detailed and scientifically accurate digital simulation.
The vertical section (left) shows a circuit within a column of motor neurons. The neurons run from the top, where the brain meets the skull, downward to the point where the motor cortex connects with other brain areas.
The various colors represent different layers of the motor cortex, and the bright spots show where motor neurons are firing. Notice the thread-like extensions of the motor neurons, some of which double back to connect cells from one layer with others some distance away. All this back and forth makes it appear as though the surface is unraveling.
This unique imaging was part of this year’s Show Us Your BRAINs! Photo & Video Contest, supported by NIH’s Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative. Nicolas Antille, an expert in turning scientific data into accurate and compelling visuals, created the images using a scientific model developed in the lab of Salvador Dura-Bernal, SUNY Downstate Health Sciences University, Brooklyn, NY. In the Dura-Bernal lab, scientists develop software and highly detailed computational models of neural circuits to better understand how they give rise to different brain functions and behavior [1].
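The lab’s modeling software, NetPyNE, is named in the reference below [1]. For a flavor of what such data-driven circuit modeling looks like, here is a minimal NetPyNE-style sketch, closely following the tool’s introductory tutorials. It requires NetPyNE and the NEURON simulator to run, and its two small populations and all parameter values are illustrative stand-ins, not the lab’s detailed motor cortex model:

```python
# A minimal sketch in the style of NetPyNE's introductory tutorials (assumes
# NetPyNE and NEURON are installed). Populations and parameters are toy values,
# not those of the Dura-Bernal motor cortex model.
from netpyne import specs, sim

netParams = specs.NetParams()

# Simple cell model: one-compartment Hodgkin-Huxley soma
netParams.cellParams['PYR'] = {
    'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
                      'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036,
                                       'gl': 0.003, 'el': -70}}}}}

# Two illustrative populations standing in for cortical layers
netParams.popParams['E2'] = {'cellType': 'PYR', 'numCells': 20}
netParams.popParams['E5'] = {'cellType': 'PYR', 'numCells': 20}

# Excitatory synapse model
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 5.0, 'e': 0}

# Background drive so the network has some activity
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E2'] = {'source': 'bkg', 'conds': {'pop': 'E2'},
                                         'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

# Probabilistic projection from one "layer" to the other
netParams.connParams['E2->E5'] = {
    'preConds': {'pop': 'E2'}, 'postConds': {'pop': 'E5'},
    'probability': 0.1, 'weight': 0.01, 'delay': 2, 'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 500  # ms
simConfig.dt = 0.025
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
simConfig.analysis = {'plotRaster': {'saveFig': True}}

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```

Full-scale models like the one pictured scale this same recipe up to realistic cell counts, layer-specific connectivity rules, and experimentally measured parameters.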
Antille’s images make the motor neurons look densely packed, but in life they would be packed about five times more densely. Antille has paused the computer simulation at a resolution he found scientifically and visually interesting. But the true interconnections among neurons, or circuits, inside a real brain, even a small portion of a real brain, are more complex than the most powerful computers today can fully process.
While Antille is invested in revealing brain circuits as close to reality as possible, he also has the mind of an artist. He works with the subtle interaction of light with these cells to show how many individual neurons form this much larger circuit. Here’s more of his artistry at work. Antille wants to invite us all to ponder—even if only for a few moments—the wondrous beauty of the mammalian brain, including this remarkable place where thoughts trigger movements.
Reference:
[1] NetPyNE, a tool for data-driven multiscale modeling of brain circuits. Dura-Bernal S, Suter BA, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, Kedziora DJ, Chadderdon GL, Kerr CC, Neymotin SA, McDougal RA, Hines M, Shepherd GM, Lytton WW. Elife. 2019 Apr 26;8:e44494.
Links:
Dura-Bernal Lab (State University of New York Downstate, Brooklyn)
Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)
Show Us Your BRAINs Photo & Video Contest (BRAIN Initiative)
NIH Support: National Institute of Biomedical Imaging and Bioengineering; National Institute of Neurological Disorders and Stroke; BRAIN Initiative
The Amazing Brain: Tight-Knit Connections
Posted by Lawrence Tabak, D.D.S., Ph.D.

You’ve likely seen pictures of a human brain showing its smooth, folded outer layer, known as the cerebral cortex. Maybe you’ve also seen diagrams highlighting some of the brain’s major internal, or subcortical, structures.
These familiar representations, however, overlook the brain’s intricate internal wiring that powers our thoughts and actions. This wiring consists of tightly bundled neural projections, called fiber tracts, that connect different parts of the brain into an integrated neural communications network.
The actual patterns of these fiber tracts are represented here and serve as the featured attraction in this award-winning image from the 2022 Show Us Your BRAINs! Photo & Video Contest. The contest is supported by NIH’s Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.
Let’s take a closer look. At the center of the brain, you see some of the major subcortical structures: hippocampus (orange), amygdala (pink), putamen (magenta), caudate nucleus (purple), and nucleus accumbens (green). The fiber tracts are presented as colorful, yarn-like projections outside of those subcortical and other brain structures. The various colors, like a wiring diagram, distinguish the different fiber tracts and their specific connections.
This award-winning atlas of brain connectivity comes from Sahar Ahmad, Ye Wu, and Pew-Thian Yap, The University of North Carolina, Chapel Hill. The UNC Chapel Hill team produced this image using a non-invasive technique called diffusion MRI tractography. It’s an emerging approach with many new possibilities for neuroscience and the clinic [1]. Ahmad’s team is putting it to work to map the brain’s many neural connections and how they change across the human lifespan.
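At its core, deterministic tractography traces streamlines by repeatedly stepping a short distance along the local principal diffusion direction estimated in each voxel. The following Python sketch illustrates just that core loop on a made-up direction field; it is a conceptual illustration, not the UNC team’s pipeline, and production tools such as DIPY or MRtrix add anisotropy thresholds, careful interpolation, and other stopping rules:

```python
# A simplified sketch of the core idea behind deterministic fiber tractography:
# starting from a seed voxel, repeatedly step along the local principal
# diffusion direction to trace a streamline.
import numpy as np

def track(seed, direction_field, step=0.5, n_steps=200):
    """Trace one streamline through a grid of unit direction vectors."""
    point = np.asarray(seed, dtype=float)
    streamline = [point.copy()]
    prev_dir = np.zeros(3)
    for _ in range(n_steps):
        idx = tuple(np.round(point).astype(int))
        if not all(0 <= i < s for i, s in zip(idx, direction_field.shape[:3])):
            break  # left the imaging volume
        d = direction_field[idx]
        if np.dot(d, prev_dir) < 0:
            d = -d  # eigenvectors are sign-ambiguous; keep heading forward
        point = point + step * d
        prev_dir = d
        streamline.append(point.copy())
    return np.array(streamline)

# Toy 10x10x10 volume whose principal direction everywhere points along +x,
# standing in for the per-voxel principal eigenvector of the diffusion tensor.
field = np.zeros((10, 10, 10, 3)); field[..., 0] = 1.0
sl = track(seed=(1, 5, 5), direction_field=field)
print(sl.shape, sl[0], sl[-1])  # a straight streamline marching along x
```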
In fact, the connectivity atlas you see here isn’t from a single human brain. It’s actually a compilation of images of the brains of multiple 30-year-olds. The researchers are using this brain imaging approach to visualize changes in the brain and its fiber tracts as people grow, develop, and mature from infancy into old age.
Sahar says their comparisons of such images show that early in life, many dynamic changes occur in the brain’s fiber tracts. Once a person reaches young adulthood, the connective wiring tends to stabilize until old age, when fiber tracts begin to break down. These and other similarly precise atlases of the human brain promise to reveal fascinating insights into brain organization and the functional dynamics of its architecture, now and in the future.
Reference:
[1] Diffusion MRI fiber tractography of the brain. Jeurissen B, Descoteaux M, Mori S, Leemans A. NMR Biomed. 2019 Apr;32(4):e3785.
Links:
Brain Basics: Know Your Brain (National Institute of Neurological Disorders and Stroke/NIH)
Sahar Ahmad (The University of North Carolina, Chapel Hill)
Ye Wu (The University of North Carolina, Chapel Hill)
Pew-Thian Yap (The University of North Carolina, Chapel Hill)
Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)
Show Us Your BRAINs Photo & Video Contest (BRAIN Initiative)
NIH Support: BRAIN Initiative; National Institute of Mental Health
The Amazing Brain: Capturing Neurons in Action
Posted by Lawrence Tabak, D.D.S., Ph.D.
With today’s powerful imaging tools, neuroscientists can monitor the firing and function of many distinct neurons in our brains, even while we move freely about. They also possess another set of tools to capture remarkable, high-resolution images of the brain’s many thousands of individual neurons, tracing the form of each intricate branch of their tree-like structures.
Most brain imaging approaches don’t capture neural form and function at once. Yet that’s precisely what you’re seeing in this knockout of a movie, another winner in the Show Us Your BRAINs! Photo & Video Contest, supported by NIH’s Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.
This first-of-its-kind look into the mammalian brain, produced by Andreas Tolias, Baylor College of Medicine, Houston, and colleagues, features about 200 neurons in the visual cortex, which receives and processes visual information. First, you see a colorful, tightly packed network of neurons. Then, those neurons, which were colorized by the researchers in vibrant pinks, reds, blues, and greens, pull apart to reveal their finely detailed patterns and shapes. Throughout the video, you can see neural activity, which appears as flashes of white that resemble lightning bolts.
Making this movie was a multi-step process. First, the Tolias group presented laboratory mice with a series of visual cues, using a functional imaging approach called two-photon calcium imaging to record the electrical activity of individual neurons. While this technique allowed the researchers to pinpoint the precise locations and activity of each individual neuron in the visual cortex, they couldn’t zoom in to see their precise structures.
So, the Baylor team sent the mice to colleagues Nuno da Costa and Clay Reid, Allen Institute for Brain Science, Seattle, who had the needed electron microscopes and technical expertise to zoom in on these structures. Their data allowed collaborator Sebastian Seung’s team, Princeton University, Princeton, NJ, to trace individual neurons in the visual cortex along their circuitous paths. Finally, they used sophisticated machine learning algorithms to carefully align the two imaging datasets and produce this amazing movie.
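The study’s alignment relied on sophisticated machine learning, but the classic building block for registering two sets of matched landmarks, say, the same cell bodies located in both the calcium-imaging and electron-microscopy volumes, is the Kabsch algorithm for rigid alignment. Here is a minimal sketch with simulated points, offered purely as an illustration of that building block:

```python
# A minimal sketch of the Kabsch algorithm for rigidly aligning two clouds of
# matched landmarks; illustrative only, with made-up points rather than real
# imaging data.
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Find rotation R and translation t minimizing ||R @ P_i + t - Q_i||."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy check: rotate and shift some "centroids", then recover the transform.
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = kabsch(P, Q)
print(np.allclose(P @ R.T + t, Q, atol=1e-8))  # True: the landmarks line up
```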
This research was supported by the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence. IARPA is one of NIH’s governmental collaborators in the BRAIN Initiative.
Tolias and team already are making use of their imaging data to learn more about the precise ways in which individual neurons and groups of neurons in the mouse visual cortex integrate visual inputs to produce a coherent view of the animals’ surroundings. They’ve also collected an even-larger data set, scaling their approach up to tens of thousands of neurons. Those data are now freely available to other neuroscientists to help advance their work. As researchers make use of these and similar data, this union of neural form and function will surely yield new high-resolution discoveries about the mammalian brain.
Links:
Tolias Lab (Baylor College of Medicine, Houston)
Nuno da Costa (Allen Institute for Brain Science, Seattle)
R. Clay Reid (Allen Institute)
H. Sebastian Seung (Princeton University, Princeton, NJ)
Machine Intelligence from Cortical Networks (MICrONS) Explorer
Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)
Show Us Your BRAINs Photo & Video Contest (BRAIN Initiative)
NIH Support: BRAIN Initiative; Common Fund
The Amazing Brain: Seeing Two Memories at Once
Posted by Lawrence Tabak, D.D.S., Ph.D.

The NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative is revolutionizing our understanding of the human brain. As described in the initiative’s name, the development of innovative imaging technologies will enable researchers to see the brain in new and increasingly dynamic ways. Each year, the initiative celebrates some standout and especially creative examples of such advances in the Show Us Your BRAINs! Photo & Video Contest. During most of August, I’ll share some of the most eye-catching developments in our blog series, The Amazing Brain.
In this fascinating image, you’re seeing two stored memories, which scientists call engrams, in the hippocampus region of a mouse’s brain. The engrams show the neural intersection of a good memory (green) and a bad memory (pink). You can also see the nuclei of many neurons (blue), including nearby neurons not involved in the memory formation.
This award-winning image was produced by Stephanie Grella in the lab of NIH-supported neuroscientist Steve Ramirez, Boston University, MA. It’s also not the first time that the blog has featured Grella’s technical artistry. Grella, who will soon launch her own lab at Loyola University, Chicago, previously captured what a single memory looks like.
To capture two memories at once, Grella relied on a technology known as optogenetics. This powerful method allows researchers to genetically engineer neurons and selectively activate them in laboratory mice using blue light. In this case, Grella used a harmless virus to deliver a light-sensitive molecule, known as an opsin, to label the neurons involved in recording a positive experience. Another molecular label was used to make those same cells appear green when activated.
After any new memory is formed, there’s a period of up to about 24 hours during which the memory is malleable. Then, the memory tends to stabilize. But with each retrieval, the memory can be modified as it restabilizes, a process known as memory reconsolidation.
Grella and team decided to try to use memory reconsolidation to their advantage to neutralize an existing fear. To do this, they placed their mice in an environment that had previously startled them. When a mouse was retrieving a fearful memory (pink), the researchers used light to activate the neurons associated with the positive memory (green), which for these particular mice consisted of positive interactions with other mice. The aim was to override or disrupt the fearful memory.
As shown by the green all throughout the image, the experiment worked. While the mice still showed some traces of the fearful memory (pink), Grella explained that the specific cells that were the focus of her study shifted to the positive memory (green).
What’s perhaps even more telling is that the evidence suggests the mice didn’t just trade one memory for another. Rather, it appears that activating a positive memory actually suppressed or neutralized the animal’s fearful memory. The hope is that this approach might one day inspire methods to help people overcome negative and unwanted memories, such as those that play a role in post-traumatic stress disorder (PTSD) and other mental health issues.
Links:
Stephanie Grella (Boston University, MA)
Ramirez Group (Boston University)
Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)
Show Us Your BRAINs Photo & Video Contest (BRAIN Initiative)
NIH Support: BRAIN Initiative; Common Fund
Human Brain Compresses Working Memories into Low-Res ‘Summaries’
Posted by Lawrence Tabak, D.D.S., Ph.D.

You’ve probably done it already a few times today: paused to remember a password, a shopping list, a phone number, or maybe the score of last night’s ballgame. The ability to store and recall needed information, called working memory, is essential for most of the human brain’s higher cognitive processes.
Researchers are still just beginning to piece together how working memory functions. But recently, NIH-funded researchers added an intriguing new piece to this neurobiological puzzle: how visual working memories are “formatted” and stored in the brain.
The findings, published in the journal Neuron [1], show that the visual cortex—the brain’s primary region for receiving, integrating, and processing visual information from the eye’s retina—acts more like a blackboard than a camera. That is, the visual cortex doesn’t photograph all the complex details of a visual image, such as the color of paper on which your password is written or the precise series of lines that make up the letters. Instead, it recodes visual information into something more like simple chalkboard sketches.
The discovery suggests that those pared down, low-res representations serve as a kind of abstract summary, capturing the relevant information while discarding features that aren’t relevant to the task at hand. It also shows that different visual inputs, such as spatial orientation and motion, may be stored in virtually identical, shared memory formats.
The new study, from Clayton Curtis and Yuna Kwak, New York University, New York, builds upon a known fundamental aspect of working memory. Many years ago, it was determined that the human brain tends to recode visual information. For instance, if passed a 10-digit phone number on a card, the visual information gets recoded and stored in the brain as the sounds of the numbers being read aloud.
Curtis and Kwak wanted to learn more about how the brain formats representations of working memory in patterns of brain activity. To find out, they measured brain activity with functional magnetic resonance imaging (fMRI) while participants used their visual working memory.
In each test, study participants were asked to remember a visual stimulus presented to them for 12 seconds and then make a memory-based judgment on what they’d just seen. In some trials, as shown in the image above, participants were shown a tilted grating, a series of black and white lines oriented at a particular angle. In others, they observed a cloud of dots, all moving in a direction to represent those same angles. After a short break, participants were asked to recall and precisely indicate the angle of the grating’s tilt or the dot cloud’s motion as accurately as possible.
It turned out that either visual stimulus—the grating or moving dots—resulted in the same patterns of neural activity in the visual cortex and parietal cortex. The parietal cortex is a part of the brain used in memory processing and storage.
These two distinct visual memories carrying the same relevant information seemed to have been recoded into a shared abstract memory format. As a result, the pattern of brain activity trained to recall motion direction was indistinguishable from that trained to recall the grating orientation.
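The logic behind that claim can be captured with a cross-decoding test: train a classifier to read out the remembered angle from one condition, then test it on the other. Transfer well above chance implies a shared format. Below is a hedged sketch of that logic with simulated “voxel” patterns, not the study’s actual analysis:

```python
# A hedged sketch of cross-condition decoding, not the study's analysis code:
# if a classifier trained on "grating" trials can read out the remembered
# angle from "moving dots" trials, the two stimuli share a memory format.
# Simulated voxel patterns stand in for real fMRI data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 200
angles = rng.integers(0, 2, size=n_trials)    # two remembered angles
shared_code = rng.normal(size=(2, n_voxels))  # one voxel pattern per angle

def simulate(labels, noise=2.0):
    """Trials = shared angle pattern + noise, regardless of stimulus type."""
    return shared_code[labels] + noise * rng.normal(size=(len(labels), n_voxels))

X_grating, X_dots = simulate(angles), simulate(angles)

clf = LogisticRegression(max_iter=1000).fit(X_grating, angles)
print(f"train on gratings, test on dots: {clf.score(X_dots, angles):.2f}")
# Well above the 0.50 chance level, because both conditions share one format.
```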
This result indicated that only the task-relevant features of the visual stimuli had been extracted and recoded into a shared memory format. But Curtis and Kwak wondered whether there might be more to this finding.
To take a closer look, they used a sophisticated model that allowed them to project the three-dimensional patterns of brain activity into a more-informative, two-dimensional representation of visual space. And, indeed, their analysis of the data revealed a line-like pattern, similar to a chalkboard sketch that’s oriented at the relevant angles.
The findings suggest that participants weren’t actually remembering the grating or a complex cloud of moving dots at all. Instead, they’d compressed the images into a line representing the angle that they’d been asked to remember.
Many questions remain about how remembering a simple angle, a relatively straightforward memory formation, will translate to the more-complex sets of information stored in our working memory. On a technical level, though, the findings show that working memory can now be accessed and captured in ways that hadn’t been possible before. This will help to delineate the commonalities in working memory formation and the possible differences, whether it’s remembering a password, a shopping list, or the score of your team’s big victory last night.
Reference:
[1] Unveiling the abstract format of mnemonic representations. Kwak Y, Curtis CE. Neuron. 2022 Apr 7;110(11):1822-1828.e5.
Links:
Working Memory (National Institute of Mental Health/NIH)
The Curtis Lab (New York University, New York)
NIH Support: National Eye Institute