Posted on by Lawrence Tabak, D.D.S., Ph.D.
You have probably done it already a few times today. Paused to remember a password, a shopping list, a phone number, or maybe the score to last night’s ballgame. The ability to store and recall needed information, called working memory, is essential for most of the human brain’s higher cognitive processes.
Researchers are still just beginning to piece together how working memory functions. But recently, NIH-funded researchers added an intriguing new piece to this neurobiological puzzle: how visual working memories are “formatted” and stored in the brain.
The findings, published in the journal Neuron, show that the visual cortex—the brain’s primary region for receiving, integrating, and processing visual information from the eye’s retina—acts more like a blackboard than a camera. That is, the visual cortex doesn’t photograph all the complex details of a visual image, such as the color of paper on which your password is written or the precise series of lines that make up the letters. Instead, it recodes visual information into something more like simple chalkboard sketches.
The discovery suggests that those pared down, low-res representations serve as a kind of abstract summary, capturing the relevant information while discarding features that aren’t relevant to the task at hand. It also shows that different visual inputs, such as spatial orientation and motion, may be stored in virtually identical, shared memory formats.
The new study, from Clayton Curtis and Yuna Kwak, New York University, New York, builds upon a known fundamental aspect of working memory. Many years ago, it was determined that the human brain tends to recode visual information. For instance, if passed a 10-digit phone number on a card, the visual information gets recoded and stored in the brain as the sounds of the numbers being read aloud.
Curtis and Kwak wanted to learn more about how the brain formats representations of working memory in patterns of brain activity. To find out, they measured brain activity with functional magnetic resonance imaging (fMRI) while participants used their visual working memory.
In each test, study participants were asked to remember a visual stimulus presented to them for 12 seconds and then make a memory-based judgment on what they’d just seen. In some trials, as shown in the image above, participants were shown a tilted grating, a series of black and white lines oriented at a particular angle. In others, they observed a cloud of dots, all moving in a direction to represent those same angles. After a short break, participants were asked to recall and precisely indicate the angle of the grating’s tilt or the dot cloud’s motion as accurately as possible.
It turned out that either visual stimulus—the grating or moving dots—resulted in the same patterns of neural activity in the visual cortex and parietal cortex. The parietal cortex is a part of the brain used in memory processing and storage.
These two distinct visual memories, carrying the same relevant information, seemed to have been recoded into a shared abstract memory format. As a result, the pattern of brain activity evoked while recalling motion direction was indistinguishable from the pattern evoked while recalling grating orientation.
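To make that cross-format logic concrete, here is a toy sketch, not the study's actual analysis: the voxel patterns and angles below are invented. A simple nearest-centroid decoder is trained only on grating trials; if orientation and motion memories truly share a format, the same decoder should correctly classify a dot-motion trial it never saw.

```python
# Toy cross-decoding sketch (hypothetical data, not the published analysis).

def centroid(patterns):
    """Element-wise mean of a list of equal-length activity patterns."""
    n = len(patterns)
    return [sum(vals) / n for vals in zip(*patterns)]

def nearest_angle(pattern, centroids):
    """Return the angle whose centroid lies closest (Euclidean) to pattern."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return min(centroids, key=lambda angle: dist(pattern, centroids[angle]))

# Invented voxel patterns recorded while remembering tilted gratings...
grating_trials = {
    45:  [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    135: [[0.0, 0.2, 1.0], [0.1, 0.1, 0.9]],
}
# ...and one invented pattern from remembering dot motion at 45 degrees.
motion_trial_45 = [0.95, 0.15, 0.05]

# Train (compute centroids) on gratings only, then decode the motion trial.
centroids = {angle: centroid(trials) for angle, trials in grating_trials.items()}
print(nearest_angle(motion_trial_45, centroids))  # decodes as 45
```

A decoder that transfers across stimulus types like this is the signature of a shared, abstract memory format.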
This result indicated that only the task-relevant features of the visual stimuli had been extracted and recoded into a shared memory format. But Curtis and Kwak wondered whether there might be more to this finding.
To take a closer look, they used a sophisticated model that allowed them to project the three-dimensional patterns of brain activity into a more-informative, two-dimensional representation of visual space. And, indeed, their analysis of the data revealed a line-like pattern, similar to a chalkboard sketch that’s oriented at the relevant angles.
The findings suggest that participants weren’t actually remembering the grating or a complex cloud of moving dots at all. Instead, they’d compressed the images into a line representing the angle that they’d been asked to remember.
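The projection idea can be sketched in simplified form (an assumption-laden illustration, not the authors' model): if each voxel "sees" a small patch of visual space, then weighting every voxel's receptive-field map by its activity and summing yields a picture of what the pattern encodes. With hypothetical voxels active along a diagonal, the reconstruction shows a line-like stripe, much like the chalkboard sketch described above.

```python
# Minimal sketch of projecting an activity pattern into 2D visual space.
# Receptive-field centres and activities below are invented for illustration.
import math

GRID = 7  # reconstruct onto a 7x7 patch of visual space

def rf_map(cx, cy, sigma=1.0):
    """Gaussian receptive field centred at (cx, cy) on the grid."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(GRID)] for y in range(GRID)]

def project(voxels):
    """Sum each voxel's receptive-field map, weighted by its activity."""
    image = [[0.0] * GRID for _ in range(GRID)]
    for activity, (cx, cy) in voxels:
        rf = rf_map(cx, cy)
        for y in range(GRID):
            for x in range(GRID):
                image[y][x] += activity * rf[y][x]
    return image

# Hypothetical voxels whose receptive fields lie along a diagonal: the
# reconstructed image is brightest along that line, like an oriented stroke.
voxels = [(1.0, (i, i)) for i in range(GRID)]
image = project(voxels)
```

Real analyses fit the receptive-field maps from independent fMRI data; the simple weighted sum here only conveys the geometry of the trick.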
Many questions remain about how remembering a simple angle, a relatively straightforward memory formation, will translate to the more-complex sets of information stored in our working memory. On a technical level, though, the findings show that working memory can now be accessed and captured in ways that hadn’t been possible before. This will help to delineate the commonalities in working memory formation and the possible differences, whether it’s remembering a password, a shopping list, or the score of your team’s big victory last night.
Unveiling the abstract format of mnemonic representations. Kwak Y, Curtis CE. Neuron. 2022 Apr 7;110(1-7).
Working Memory (National Institute of Mental Health/NIH)
The Curtis Lab (New York University, New York)
NIH Support: National Eye Institute
Posted on by Dr. Francis Collins
If you’re like me, you might catch yourself during the day in front of a computer screen mindlessly tapping your fingers. (I always check first to be sure my mute button is on!) But all that tapping isn’t as mindless as you might think.
While a research participant performs a simple motor task, tapping her fingers together, this video shows blood flow within the folds of her brain’s primary motor cortex (gray and white), which controls voluntary movement. Areas of high brain activity (yellow and red) emerge in the omega-shaped “hand-knob” region, the part of the brain controlling hand movement (right of center) and then further back within the primary somatic cortex (which borders the motor cortex toward the back of the head).
About 38 seconds in, the right half of the video screen illustrates that finger tapping activates both superficial and deep layers of the primary motor cortex. In contrast, the sensation of a hand being brushed (a sensory task) mostly activates superficial layers, where the primary sensory cortex is located. This fits with what we know about the superficial and deep layers of the hand-knob region, which are responsible for receiving sensory input and generating motor output to control finger movements, respectively.
The video showcases a new technology called zoomed 7T perfusion functional MRI (fMRI). It was an entry in the recent Show Us Your BRAINs! Photo and Video Contest, supported by NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.
The technology is under development by an NIH-funded team led by Danny J.J. Wang, University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles. Zoomed 7T perfusion fMRI was developed by Xingfeng Shao and brought to life by the group’s medical animator Jim Stanis.
Measuring brain activity using fMRI to track perfusion is not new. The brain needs a lot of oxygen, carried to it by arteries running throughout the head, to carry out its many complex functions. Given the importance of oxygen to the brain, you can think of perfusion levels, measured by fMRI, as a stand-in measure for neural activity.
There are two things that are new about zoomed 7T perfusion fMRI. For one, it uses the first ultrahigh-magnetic-field imaging scanner approved by the Food and Drug Administration. The technology also has high sensitivity for detecting blood flow changes in the tiny arteries and capillaries throughout the many layers of the cortex.
Compared to previous MRI methods with weaker magnets, the new technique can measure blood flow on a fine-grained scale, enabling scientists to remove unwanted signals (“noise”) such as those from surface-level arteries and veins. Getting an accurate read-out of activity from region to region across cortical layers can help scientists understand human brain function in greater detail in health and disease.
Having shown that the technology works as expected during relatively mundane hand movements, Wang and his team are now developing the approach for fine-grained 3D mapping of brain activity throughout the many layers of the brain. This type of analysis, known as mesoscale mapping, is key to understanding dynamic activities of neural circuits that connect brain cells across cortical layers and among brain regions.
Decoding circuits, and ultimately rewiring them, is a major goal of NIH’s BRAIN Initiative. Zoomed 7T perfusion fMRI gives us a window into 4D biology, which is the ability to watch 3D objects over time scales in which life happens, whether it’s playing an elaborate drum roll or just tapping your fingers.
Neuroanatomical localization of the ‘precentral knob’ with computed tomography imaging. Park MC, Goldman MA, Park MJ, Friehs GM. Stereotact Funct Neurosurg. 2007;85(4):158-61.
Laminar perfusion imaging with zoomed arterial spin labeling at 7 Tesla. Shao X, Guo F, Shou Q, Wang K, Jann K, Yan L, Toga AW, Zhang P, Wang DJJ. bioRxiv. 2021.04.13.439689.
Brain Basics: Know Your Brain (National Institute of Neurological Disorders and Stroke)
Laboratory of Functional MRI Technology (University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute)
Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative)
NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering; Office of the Director
Posted on by Dr. Francis Collins
Today, you may have opened a jar, done an upper body workout, played a guitar or a piano, texted a friend, or maybe even jotted down a grocery list longhand. All of these “skilled” arm, wrist, and hand movements are made possible by the bundled nerves, or circuits, running through a part of the central nervous system in the neck area called the cervical spine.
This video, which combines sophisticated imaging and computation with animation, shows the density of three types of nerve cells in the mouse cervical spine. There are the V1 interneurons (red), which sit between sensory and motor neurons; motor neurons associated with controlling the movement of the bicep (blue); and motor neurons associated with controlling the tricep (green).
At 4 seconds, the 3D animation morphs to show all the colors and cells intermixed as they are naturally in the cervical spine. At 8 seconds, the animation highlights the density of these three cell types. Notice in the bottom left corner that a light icon appears, indicating the different imaging perspectives. What’s unique here is the frontal, or rostral, view of the cervical spine, which is typically imaged from a lateral, or side, perspective.
Starting at 16 seconds, the animation highlights the location and density of each of the individual neurons. For the grand finale, viewers zoom off on a brief fly-through of the cervical spine and a flurry of reds, blues, and greens.
The video comes from Jamie Anne Mortel, a research assistant in the lab of Samuel Pfaff, Salk Institute, La Jolla, CA. Mortel is part of a team supported by the NIH-led Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative that’s developing a comprehensive atlas of the circuitry within the cervical spine that controls forelimb movements in mice, such as reaching and grasping.
This basic research will provide a better understanding of how the mammalian brain and spinal cord work together to produce movement. More than that, it may provide valuable clues to better treating paralysis of the arms, wrists, and hands caused by neurological diseases and spinal cord injuries.
As a part of this project, the Pfaff lab has been busy developing a software tool to take their imaging data from different parts of the cervical spine and present it in 3D. Mortel, who likes to make cute cartoon animations in her spare time, noticed that the software lacked animation capability. So she took the initiative and spent the next three weeks working after hours to produce this video—her first attempt at scientific animation. No doubt she must have been using a lot of wrist and hand movements!
With a positive response from her Salk labmates, Mortel decided to enter her scientific animation debut in the 2021 Show Us Your BRAINs! Photo and Video Contest. To her great surprise and delight, she won third place in the video competition. Congratulations, and continued success to you and the team in producing this much-needed atlas to define the circuitry underlying skilled arm, wrist, and hand movements.
Spinal Cord Injury Information Page (National Institute of Neurological Disorders and Stroke/NIH)
Samuel Pfaff (Salk Institute, La Jolla, CA)
Show Us Your BRAINs! Photo and Video Contest (Brain Initiative/NIH)
NIH Support: National Institute of Neurological Disorders and Stroke
Posted on by Dr. Francis Collins
In days mostly gone by, it was fashionable in some circles for people to hand out calling cards to mark their arrival at special social events. This genteel human tradition is now being adapted to the lab to allow certain benign viruses to issue their own high-tech calling cards and mark their arrival at precise locations in the genome. These special locations show where there’s activity involving transcription factors, specialized proteins that switch genes on and off and help determine cell fate.
The idea is that myriad, well-placed calling cards can track brain development over time in mice and detect changes in transcription factor activity associated with certain neuropsychiatric disorders. This colorful image, which won first place in this year’s Show Us Your BRAINs! Photo and Video contest, provides a striking display of these calling cards in action in living brain tissue.
The image comes from Allen Yen, a PhD candidate in the lab of Joseph Dougherty, collaborating with the nearby lab of Rob Mitra. Both labs are located in the Washington University School of Medicine, St. Louis.
Yen and colleagues zoomed in on this section of mouse brain tissue under a microscope to capture dozens of detailed images that they then stitched together to create this high-resolution overview. The image shows neural cells (red) and cell nuclei (blue). But focus in on the green cells concentrated in the brain’s outer cortex (top) and hippocampus (two lobes in the upper center). They’ve been labelled with calling cards that were dropped off by an adeno-associated virus.
Once dropped off, a calling card doesn’t bear a pretentious name or title. Rather, the calling card is a small mobile snippet of DNA called a transposon. It gets dropped off with the other essential component of the technology: a specialized enzyme called a transposase, which the researchers fuse to one of many specific transcription factors of interest.
Each time one of these transcription factors of interest binds DNA to help turn a gene on or off, the attached transposase “grabs” a transposon calling card and inserts it into the genome. As a result, it leaves behind a permanent record of the interaction.
What’s also nice is that the calling cards are programmed to give away their general locations. That’s because they encode a fluorescent marker (in this image, it’s a green fluorescent protein). In fact, Yen and colleagues could look under a microscope and tell from all the green that their calling card technology was in place and working as intended.
The final step, though, was to find out precisely where in the genome those calling cards had been left. For this, the researchers used next-generation sequencing to produce a cumulative history and map of each and every calling card dropped off in the genome.
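In spirit, that mapping step amounts to tallying where each sequenced calling card landed. The sketch below is purely hypothetical: the genomic coordinates and region names are invented, and real pipelines work from aligned sequencing reads rather than a hand-written list.

```python
# Hypothetical sketch of building a calling-card map: assign each mapped
# insertion coordinate to a genomic region and tally insertions per region.
from collections import Counter

# Invented regions of interest (name -> half-open coordinate interval).
regions = {"promoterA": (1000, 2000), "promoterB": (5000, 6000)}

def assign_region(pos):
    """Return the region containing pos, or 'intergenic' if none does."""
    for name, (start, end) in regions.items():
        if start <= pos < end:
            return name
    return "intergenic"

# Invented positions where sequencing located calling-card insertions.
insertions = [1100, 1500, 1900, 5500, 7000]

card_map = Counter(assign_region(p) for p in insertions)
# card_map tallies: promoterA 3, promoterB 1, intergenic 1
```

Regions that accumulate many cards mark spots the transcription factor visited often, which is exactly the cumulative record the blog describes.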
These comprehensive maps allow them to identify important DNA-protein binding events well after the fact. This innovative technology also enables scientists to link past molecular interactions to observable developmental outcomes in a way that isn’t otherwise possible.
While the Mitra and Dougherty labs continue to improve upon this technology, it’s already readily adaptable to answering many important questions about the brain and brain disorders. In fact, Yen is now applying the technology to study neurodevelopment in mouse models of neuropsychiatric disorders, specifically autism spectrum disorder (ASD). This calling card technology also is available for any lab to deploy for studying a transcription factor of interest.
This research is supported by the Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative. One of the major goals of the BRAIN Initiative is to accelerate the development and application of innovative technologies to gain new understanding of the brain. This award-winning image is certainly a prime example of striving to meet this goal. I’ll look forward to what these calling cards will tell us in the future about ASD and other important neurodevelopmental conditions affecting the brain.
A viral toolkit for recording transcription factor-DNA interactions in live mouse tissues. Cammack AJ, Moudgil A, Chen J, Vasek MJ, Shabsovich M, McCullough K, Yen A, Lagunas T, Maloney SE, He J, Chen X, Hooda M, Wilkinson MN, Miller TM, Mitra RD, Dougherty JD. Proc Natl Acad Sci U S A. 2020 May 5;117(18):10003-10014.
A MYT1L Syndrome mouse model recapitulates patient phenotypes and reveals altered brain development due to disrupted neuronal maturation. Chen J, Lambo ME, Ge X, Dearborn JT, Liu Y, McCullough KB, Swift RG, Tabachnick DR, Tian L, Noguchi K, Garbow JR, Constantino JN. bioRxiv. 2021 May 27.
Autism Spectrum Disorder (National Institute of Mental Health/NIH)
Dougherty Lab (Washington University School of Medicine, St. Louis)
Mitra Lab (Washington University School of Medicine)
Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative/NIH)
NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of Mental Health; National Center for Advancing Translational Sciences; National Human Genome Research Institute; National Institute of General Medical Sciences
Posted on by Dr. Francis Collins
Flip the image above upside down, and the shape may remind you of something. If you think it resembles a pyramid, then you and a lot of great neuroscientists are thinking alike. What you are viewing is a colorized, 3D reconstruction of the pyramidal tract, a bundle of nerve fibers that originates in the brain’s cerebral cortex and relays signals to the brainstem or the spinal cord. These signals control many important activities, including the voluntary movement of our arms, legs, head, and face.
For a while now, it’s been possible to combine a specialized form of magnetic resonance imaging (MRI) with computer modeling tools to produce 3D reconstructions of complicated networks of nerve fibers, such as the pyramidal tract. Still, for technical reasons, the quality of these reconstructions has remained poor in parts of the brain where nerve fibers cross at angles of 40 degrees or less.
The video above demonstrates how adding a sophisticated algorithm, called Orientation Distribution Function (ODF)-Fingerprinting, to such modeling can help overcome this problem when reconstructing a pyramidal tract. It has the potential to enhance the reliability of these 3D reconstructions as neurosurgeons begin to use them to plan surgeries, helping to ensure the procedures are carried out with the utmost safety and precision.
In the first second of the video, you see gray, fuzzy images from a diffusion MRI of the pyramidal tract. But, very quickly, a more colorful, detailed 3D reconstruction begins to appear, swiftly filling in from the top down. Colors are used to indicate the primary orientations of the nerve fibers: left to right (red), back to front (green), and top to bottom (blue). The orange, magenta, and other colors represent combinations of these primary directional orientations.
About three seconds into the video, a rough draft of the 3D reconstruction is complete. The top of the pyramidal tract looks pretty good. However, looking lower down, you can see distortions in color and relatively poor resolution of the nerve fibers in the middle of the tract—exactly where the fibers cross each other at angles of less than 40 degrees. So, researchers tapped into the power of their new ODF-Fingerprinting software to improve the image—and, starting about nine seconds into the video, you can see an impressive final result.
The researchers who produced this amazing video are Patryk Filipiak and colleagues in the NIH-supported lab of Steven Baete, Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York. The work paired diffusion MRI data from the NIH Human Connectome Project with the ODF-Fingerprinting algorithm, which was created by Baete to incorporate additional MRI data on the shape of nerve fibers to infer their directionality.
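The core of any fingerprinting approach can be illustrated with a deliberately simplified sketch (this is an assumed, stripped-down stand-in, not the published algorithm): precompute a dictionary of expected signal "fingerprints" for candidate fiber configurations, then match each measured signal to its nearest dictionary entry. The fingerprint vectors and configuration names below are made up.

```python
# Simplified dictionary-matching sketch of the fingerprinting idea
# (hypothetical fingerprints; the real method works on measured ODFs).

def best_match(signal, dictionary):
    """Return the fiber configuration whose fingerprint best matches signal."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(dictionary, key=lambda cfg: dist(signal, dictionary[cfg]))

# Invented dictionary: fiber configurations -> expected signal fingerprints.
fingerprints = {
    "single fiber":       [1.0, 0.2, 0.2, 1.0],
    "30-degree crossing": [0.8, 0.5, 0.5, 0.8],
    "90-degree crossing": [0.6, 0.6, 0.6, 0.6],
}

measured = [0.78, 0.52, 0.48, 0.81]
print(best_match(measured, fingerprints))  # -> "30-degree crossing"
```

Because the dictionary can contain templates for tight crossings, matching against it can distinguish configurations that simpler per-voxel model fitting blurs together, which is the advantage the video highlights for angles under 40 degrees.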
This innovative approach to imaging recently earned Baete’s team second place in the 2021 “Show Us Your BRAINs” Photo and Video contest, sponsored by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative. But researchers aren’t stopping there! They are continuing to refine ODF-Fingerprinting, with the aim of modeling the pyramidal tract in even higher resolution for use in devising new and better ways of helping people undergoing neurosurgery.
Fingerprinting Orientation Distribution Functions in diffusion MRI detects smaller crossing angles. Baete SH, Cloos MA, Lin YC, Placantonakis DG, Shepherd T, Boada FE. Neuroimage. 2019 Sep;198:231-241.
Human Connectome Project (University of Southern California, Los Angeles)
Steven Baete (Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York)
Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative/NIH)
NIH Support: National Institute of Biomedical Imaging and Bioengineering; National Institute of Neurological Disorders and Stroke; National Cancer Institute