
Human Brain Compresses Working Memories into Low-Res ‘Summaries’


Stimulus images are disks of angled lines; a thought bubble shows similar angles in the participant’s thoughts.
Credit: Adapted from Kwak Y., Neuron (2022)

You have probably done it already a few times today. Paused to remember a password, a shopping list, a phone number, or maybe the score of last night’s ballgame. The ability to store and recall needed information, called working memory, is essential for most of the human brain’s higher cognitive processes.

Researchers are still just beginning to piece together how working memory functions. But recently, NIH-funded researchers added an intriguing new piece to this neurobiological puzzle: how visual working memories are “formatted” and stored in the brain.

The findings, published in the journal Neuron, show that the visual cortex—the brain’s primary region for receiving, integrating, and processing visual information from the eye’s retina—acts more like a blackboard than a camera. That is, the visual cortex doesn’t photograph all the complex details of a visual image, such as the color of paper on which your password is written or the precise series of lines that make up the letters. Instead, it recodes visual information into something more like simple chalkboard sketches.

The discovery suggests that those pared-down, low-res representations serve as a kind of abstract summary, capturing the task-relevant information while discarding the rest. It also shows that different visual inputs, such as spatial orientation and motion, may be stored in virtually identical, shared memory formats.

The new study, from Clayton Curtis and Yuna Kwak, New York University, New York, builds upon a known fundamental aspect of working memory. Many years ago, it was determined that the human brain tends to recode visual information. For instance, if you’re handed a card with a 10-digit phone number on it, your brain recodes and stores the visual information as the sounds of the numbers being read aloud.

Curtis and Kwak wanted to learn more about how the brain formats representations of working memory in patterns of brain activity. To find out, they measured brain activity with functional magnetic resonance imaging (fMRI) while participants used their visual working memory.

In each test, study participants were asked to hold a visual stimulus in working memory over a 12-second delay and then make a memory-based judgment on what they’d just seen. In some trials, as shown in the image above, participants were shown a tilted grating, a series of black and white lines oriented at a particular angle. In others, they observed a cloud of dots, all moving in directions matching those same angles. After the delay, participants were asked to recall and indicate, as precisely as they could, the angle of the grating’s tilt or the dot cloud’s motion.

It turned out that either visual stimulus—the grating or moving dots—resulted in the same patterns of neural activity in the visual cortex and parietal cortex. The parietal cortex is a part of the brain used in memory processing and storage.

These two distinct visual stimuli, carrying the same relevant information, seemed to have been recoded into a shared abstract memory format. As a result, the patterns of brain activity evoked while recalling motion direction were indistinguishable from those evoked while recalling grating orientation.
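
The shared-format idea can be illustrated with a toy cross-decoding analysis. This is only a sketch with made-up numbers, not the authors’ actual analysis: a simple decoder is “trained” on simulated voxel patterns from orientation trials and then tested on motion trials. If the two stimuli share a memory format, the decoder transfers across them.

```python
# Toy cross-decoding with made-up numbers: a nearest-centroid decoder is
# "trained" on simulated voxel patterns from grating (orientation) trials
# and tested on dot-motion trials. Because the simulation gives both trial
# types the same underlying pattern per angle (a shared format), the
# decoder transfers across stimulus types.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels, n_angles = 200, 50, 4

# Shared format: each remembered angle evokes one voxel pattern, no matter
# whether it was cued by a grating or by moving dots.
angle_patterns = rng.normal(size=(n_angles, n_voxels))

def simulate_trials(n):
    labels = rng.integers(0, n_angles, size=n)
    data = angle_patterns[labels] + rng.normal(scale=1.0, size=(n, n_voxels))
    return data, labels

X_grating, y_grating = simulate_trials(n_trials)  # orientation trials
X_motion, y_motion = simulate_trials(n_trials)    # motion-direction trials

# Train on gratings: average the trials for each angle into a centroid.
centroids = np.stack([X_grating[y_grating == a].mean(axis=0)
                      for a in range(n_angles)])

# Test on motion: classify each motion trial by its nearest centroid.
dists = np.linalg.norm(X_motion[:, None, :] - centroids[None, :, :], axis=2)
transfer_accuracy = (dists.argmin(axis=1) == y_motion).mean()
print(f"cross-decoding accuracy: {transfer_accuracy:.2f} (chance = 0.25)")
```

If the two trial types evoked unrelated patterns instead, transfer accuracy would hover near chance.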

This result indicated that only the task-relevant features of the visual stimuli had been extracted and recoded into a shared memory format. But Curtis and Kwak wondered whether there might be more to this finding.

To take a closer look, they used a sophisticated model that allowed them to project the three-dimensional patterns of brain activity into a more-informative, two-dimensional representation of visual space. And, indeed, their analysis of the data revealed a line-like pattern, similar to a chalkboard sketch that’s oriented at the relevant angles.
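
To get a rough feel for that projection step, here is a hypothetical toy version (the study’s actual model is far more sophisticated): if each voxel is assigned a small receptive field in visual space, then weighting each receptive field by that voxel’s activity and summing turns a 1D activity pattern back into a 2D picture, where a remembered angle shows up as a line.

```python
# Toy projection of voxel activity back into 2D visual space (the study's
# actual model is far more sophisticated). Each simulated voxel gets a
# small Gaussian receptive field; weighting the fields by voxel activity
# and summing converts a 1D activity pattern into a 2D image. Here,
# voxels "watching" the diagonal are most active, so the reconstruction
# is a line at 45 degrees: the chalkboard-sketch effect.
import numpy as np

grid = 64
ys, xs = np.mgrid[0:grid, 0:grid]

# One hypothetical voxel every 4 pixels, each with a Gaussian receptive field.
centers = [(y, x) for y in range(0, grid, 4) for x in range(0, grid, 4)]

def receptive_field(cy, cx, sigma=3.0):
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

# Pretend the remembered 45-degree angle drives voxels along the diagonal.
activity = np.array([np.exp(-abs(cy - cx) / 4.0) for cy, cx in centers])

# Project back into visual space: activity-weighted sum of receptive fields.
image = sum(a * receptive_field(cy, cx)
            for a, (cy, cx) in zip(activity, centers))

# The reconstruction is bright along the diagonal and dark elsewhere.
on_diag = image[np.arange(grid), np.arange(grid)].mean()
off_diag = image[np.arange(grid), (np.arange(grid) + 32) % grid].mean()
print(f"diagonal mean {on_diag:.2f} vs off-diagonal mean {off_diag:.2f}")
```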

The findings suggest that participants weren’t actually remembering the grating or a complex cloud of moving dots at all. Instead, they’d compressed the images into a line representing the angle that they’d been asked to remember.

Many questions remain about how remembering a simple angle, a relatively straightforward memory formation, will translate to the more-complex sets of information stored in our working memory. On a technical level, though, the findings show that working memory can now be accessed and captured in ways that hadn’t been possible before. This will help to delineate the commonalities in working memory formation and the possible differences, whether it’s remembering a password, a shopping list, or the score of your team’s big victory last night.

Reference:

[1] Unveiling the abstract format of mnemonic representations. Kwak Y, Curtis CE. Neuron. 2022 Apr 7;110:1-7.

Links:

Working Memory (National Institute of Mental Health/NIH)

The Curtis Lab (New York University, New York)

NIH Support: National Eye Institute


Capturing the Extracellular Matrix in 3D Color


Credit: Sarah Lipp, Purdue University, and Sarah Calve, University of Colorado, Boulder

For experienced and aspiring shutterbugs alike, sometimes the best photo in the bunch turns out to be a practice shot. That’s also occasionally true in the lab when imaging cells and tissues, and it’s the story behind this spectacular image showing the interface of skin and muscle during mammalian development.

Here you see an area of the mouse forelimb located near a bone called the humerus. This particular sample was labeled for laminin, a protein found in the extracellular matrix (ECM) that undergirds cells and tissues to give them mechanical and biochemical support. Computer algorithms were used to convert the original stack of 2D confocal scans into a 3D image, and colorization was added to bring the different layers of tissue into sharper relief.
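
Depth-coded colorization of an image stack can be sketched in a few lines. This is a generic illustration with random stand-in data, not the actual pipeline used for this image:

```python
# Generic sketch of depth-coded colorization for a confocal z-stack (not
# the actual pipeline used for this image). Each slice gets a hue based on
# its depth; a maximum-intensity projection then keeps, for every pixel,
# the brightness of the brightest slice and the color of its depth, so
# color encodes how deep the labeled tissue sits.
import colorsys
import numpy as np

rng = np.random.default_rng(1)
n_slices, height, width = 16, 32, 32
stack = rng.random((n_slices, height, width))  # stand-in for fluorescence data

# Hue ramps from red (shallow slices) to blue (deep slices).
hues = np.linspace(0.0, 0.66, n_slices)
colors = np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues])

depth_of_max = stack.argmax(axis=0)                # which slice is brightest
intensity = stack.max(axis=0)                      # maximum-intensity projection
rgb = colors[depth_of_max] * intensity[..., None]  # depth color x brightness

print("projected image shape:", rgb.shape)  # (32, 32, 3)
```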

Skin tissue (bright red and yellow) is located near the top of the image; blood vessels (paler red, orange, and yellow) are in the middle and branching downward; and muscle (green, blue, and purple) makes up the bottom layer.

The image was created by Sarah Lipp, a graduate student in the NIH-supported tissue engineering lab of Sarah Calve. The team focuses on tissue interfaces to better understand the ECM and help devise strategies to engineer musculoskeletal tissues, such as tendon and cartilage.

In February 2020, Lipp was playing around with some new software tools for tissue imaging. Before zeroing in on her main target (the mouse’s myotendinous junction, where muscle transfers its force to tendon), Lipp snapped this practice shot of skin meeting muscle. After processing the shot with a color-projecting macro in an image-processing tool called Fiji, she immediately liked what she saw.

So, Lipp tweaked the color a bit more and entered the image in the 2020 BioArt Scientific Image & Video Competition, sponsored by the Federation of American Societies for Experimental Biology, Bethesda, MD. Last December, the grad student received the good news that her practice shot had snagged one of the prestigious contest’s top awards.

But she’s not stopping there. Lipp is continuing to pursue her research interests at the University of Colorado, Boulder, where the Calve lab recently moved from Purdue University, West Lafayette, IN. Here’s wishing her a career filled with more great images—and great science!

Links:

Muscle and Bone Diseases (National Institute of Arthritis and Musculoskeletal and Skin Diseases/NIH)

Musculoskeletal Extracellular Matrix Laboratory (University of Colorado, Boulder)

BioArt Scientific Image & Video Competition (Federation of American Societies for Experimental Biology, Bethesda, MD)

NIH Support: National Institute of Arthritis and Musculoskeletal and Skin Diseases


Tapping Into The Brain’s Primary Motor Cortex


If you’re like me, you might catch yourself during the day in front of a computer screen mindlessly tapping your fingers. (I always check first to be sure my mute button is on!) But all that tapping isn’t as mindless as you might think.

While a research participant performs a simple motor task, tapping her fingers together, this video shows blood flow within the folds of her brain’s primary motor cortex (gray and white), which controls voluntary movement. Areas of high brain activity (yellow and red) emerge first in the omega-shaped “hand-knob” region, the part of the motor cortex controlling hand movement (right of center), and then further back in the primary somatosensory cortex (which borders the motor cortex toward the back of the head).

About 38 seconds in, the right half of the video screen illustrates that the finger tapping activates both superficial and deep layers of the primary motor cortex. In contrast, the sensation of a hand being brushed (a sensory task) mostly activates superficial layers, where the primary sensory cortex is located. This fits with what we know about the superficial and deep layers of the hand-knob region, since they are responsible for receiving sensory input and generating motor output to control finger movements, respectively [1].

The video showcases a new technology called zoomed 7T perfusion functional MRI (fMRI). It was an entry in the recent Show Us Your BRAINs! Photo and Video Contest, supported by NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.

The technology is under development by an NIH-funded team led by Danny J.J. Wang, University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles. Zoomed 7T perfusion fMRI was developed by Xingfeng Shao, and the video was brought to life by the group’s medical animator, Jim Stanis.

Measuring brain activity using fMRI to track perfusion is not new. The brain needs a lot of oxygen, delivered to it by arteries running throughout the head, to carry out its many complex functions. Given the importance of oxygen to the brain, you can think of perfusion levels, measured by fMRI, as a stand-in measure for neural activity.

There are two things that are new about zoomed 7T perfusion fMRI. For one, it uses the first ultrahigh magnetic field imaging scanner approved by the Food and Drug Administration. The technology also has high sensitivity for detecting blood flow changes in tiny arteries and capillaries throughout the many layers of the cortex [2].

Compared to previous MRI methods with weaker magnets, the new technique can measure blood flow on a fine-grained scale, enabling scientists to remove unwanted signals (“noise”) such as those from surface-level arteries and veins. Getting an accurate read-out of activity from region to region across cortical layers can help scientists understand human brain function in greater detail in health and disease.

Having shown that the technology works as expected during relatively mundane hand movements, Wang and his team are now developing the approach for fine-grained 3D mapping of brain activity throughout the many layers of the brain. This type of analysis, known as mesoscale mapping, is key to understanding dynamic activities of neural circuits that connect brain cells across cortical layers and among brain regions.

Decoding circuits, and ultimately rewiring them, is a major goal of NIH’s BRAIN Initiative. Zoomed 7T perfusion fMRI gives us a window into 4D biology: the ability to watch 3D structures change over the time scales in which life happens, whether it’s playing an elaborate drum roll or just tapping your fingers.

References:

[1] Neuroanatomical localization of the ‘precentral knob’ with computed tomography imaging. Park MC, Goldman MA, Park MJ, Friehs GM. Stereotact Funct Neurosurg. 2007;85(4):158-61.

[2] Laminar perfusion imaging with zoomed arterial spin labeling at 7 Tesla. Shao X, Guo F, Shou Q, Wang K, Jann K, Yan L, Toga AW, Zhang P, Wang DJJ. bioRxiv. 2021.04.13.439689.

Links:

Brain Basics: Know Your Brain (National Institute of Neurological Disorders and Stroke)

Laboratory of Functional MRI Technology (University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute)

The Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering; Office of the Director


New Microscope Technique Provides Real-Time 3D Views


Most of the “cool” videos shared on my blog are borne of countless hours behind a microscope. Researchers must move a biological sample through a microscope’s focus, slowly acquiring hundreds of high-res 2D snapshots, one painstaking snap at a time. Afterwards, sophisticated computer software takes this ordered “stack” of images, calculates how the object would look from different perspectives, and displays the results as 3D views of life that can be streamed as short videos.

But this video is different. It was created by what’s called a multi-angle projection imaging system. This new optical device requires just a few camera snapshots and two mirrors to image a biological sample from multiple angles at once. Because the device eliminates the time-consuming process of acquiring individual image slices, it’s up to 100 times faster than current technologies and doesn’t require computer software to construct the movie. The kicker is that the video can be displayed in real time, which isn’t possible with existing image-stacking methods.

The video here shows two human melanoma cells, rotating several times between overhead and side views. You can see large amounts of the protein PI3K (brighter orange hues indicate higher concentrations), which helps some cancer cells divide and move around. Near the cell’s perimeter are small, dynamic surface protrusions. PI3K in these “blebs” is thought to help tumor cells navigate and survive in foreign tissues as the tumor spreads to other organs, a process known as metastasis.

The new multi-angle projection imaging system optical device was described in a paper published recently in the journal Nature Methods [1]. It was created by Reto Fiolka and Kevin Dean at the University of Texas Southwestern Medical Center, Dallas.

Like most new technology, this device is complicated. Rather than the microscope and camera doing all the work, as is customary, two mirrors within the microscope play a starring role. During a camera exposure, these mirrors rotate ever so slightly and warp the acquired image in such a way that successive, unique perspectives of the sample magically come into view. By changing the amount of warp, the sample appears to rotate in real time. As such, each view shown in the video requires only a single camera snapshot, rather than the hundreds of slices acquired in a conventional scheme.

The concept traces back to a computer-graphics algorithm called the shear-warp transform, which is used to render 3D objects from different perspectives on a 2D computer monitor. Fiolka, Dean, and team found they could implement a similar algorithm optically for use with a microscope. What’s more, their multi-angle projection imaging system is easy to use, inexpensive, and can be adapted to any camera-based microscope.
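
The shear-warp idea itself is simple enough to sketch in software. In this toy version (an illustration only; the team’s implementation does the shearing optically with mirrors), each z-slice of a volume is shifted sideways in proportion to its depth before the slices are summed into a 2D projection, and varying the shear from frame to frame makes the sample appear to rotate.

```python
# Toy software version of the shear-warp idea (the device does this
# optically with its mirrors). To "view" a 3D volume from an angle, shift
# each z-slice sideways in proportion to its depth, then sum all slices
# into one 2D projection. Changing the shear per frame rotates the view.
import numpy as np

def shear_warp_projection(volume, shear):
    """Parallel projection of volume (z, y, x); shear = pixels of shift per slice."""
    nz, ny, nx = volume.shape
    projection = np.zeros((ny, nx))
    for z in range(nz):
        shift = int(round(shear * z))
        projection += np.roll(volume[z], shift, axis=1)  # shear this slice
    return projection

# A bright column of voxels running straight down a small volume...
volume = np.zeros((8, 16, 16))
volume[:, 8, 8] = 1.0

head_on = shear_warp_projection(volume, shear=0.0)  # column stacks into one spot
angled = shear_warp_projection(volume, shear=1.0)   # column smears sideways

print(head_on[8, 8], angled[8].max())  # 8.0 1.0
```

Seen head-on (zero shear), the column collapses into a single bright spot; at an angle, it spreads across the projection, just as a rotated object would.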

The researchers have used the device to view samples spanning a range of sizes: from mitochondria and other tiny organelles inside cells to the beating heart of a young zebrafish. And, as the video shows, it has been applied to study cancer and other human diseases.

In a neat but also scientifically valuable twist, the new optical method can generate a virtual reality view of a sample. Any microscope user wearing the appropriately colored 3D glasses immediately sees the sample in 3D.

While virtual reality viewing of cellular life might sound like a gimmick, Fiolka and Dean believe that it will help researchers use their current microscopes to see any sample in 3D—offering the chance to find rare and potentially important biological events much faster than is possible with even the most advanced microscopes today.

Fiolka, Dean, and team are still just getting started. Because the method analyzes tissue very quickly within a single image frame, they say it will enable scientists to observe the fastest events in biology, such as the movement of calcium throughout a neuron—or even a whole bundle of neurons at once. For neuroscientists trying to understand the brain, that’s a movie they will really want to see.

Reference:

[1] Real-time multi-angle projection imaging of biological dynamics. Chang BJ, Manton JD, Sapoznik E, Pohlkamp T, Terrones TS, Welf ES, Murali VS, Roudot P, Hake K, Whitehead L, York AG, Dean KM, Fiolka R. Nat Methods. 2021 Jul;18(7):829-834.

Links:

Metastatic Cancer: When Cancer Spreads (National Cancer Institute)

Fiolka Lab (University of Texas Southwestern Medical Center, Dallas)

Dean Lab (University of Texas Southwestern)

Microscopy Innovation Lab (University of Texas Southwestern)

NIH Support: National Cancer Institute; National Institute of General Medical Sciences


The Amazing Brain: Motor Neurons of the Cervical Spine


Today, you may have opened a jar, done an upper body workout, played a guitar or a piano, texted a friend, or maybe even jotted down a grocery list longhand. All of these “skilled” arm, wrist, and hand movements are made possible by the bundled nerves, or circuits, running through a part of the central nervous system in the neck area called the cervical spine.

This video, which combines sophisticated imaging and computation with animation, shows the density of three types of nerve cells in the mouse cervical spine: V1 interneurons (red), which sit between sensory and motor neurons; motor neurons that control the biceps (blue); and motor neurons that control the triceps (green).

At 4 seconds, the 3D animation morphs to show all the colors and cells intermixed as they are naturally in the cervical spine. At 8 seconds, the animation highlights the density of these three cell types. Notice that in the bottom left corner a light icon appears, indicating the different imaging perspectives. What’s unique here is the frontal, or rostral, view of the cervical spine, which is typically imaged from a lateral, or side, perspective.

Starting at 16 seconds, the animation highlights the location and density of each of the individual neurons. For the grand finale, viewers zoom off on a brief fly-through of the cervical spine and a flurry of reds, blues, and greens.

The video comes from Jamie Anne Mortel, a research assistant in the lab of Samuel Pfaff, Salk Institute, La Jolla, CA. Mortel is part of a team supported by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative that’s developing a comprehensive atlas of the cervical spine circuitry that controls forelimb movements in mice, such as reaching and grasping.

This basic research will provide a better understanding of how the mammalian brain and spinal cord work together to produce movement. More than that, it may provide valuable clues to better treat paralysis of the arms, wrists, and hands caused by neurological diseases and spinal cord injuries.

As a part of this project, the Pfaff lab has been busy developing a software tool to take their imaging data from different parts of the cervical spine and present it in 3D. Mortel, who likes to make cute cartoon animations in her spare time, noticed that the software lacked animation capability. So she took the initiative and spent the next three weeks working after hours to produce this video—her first attempt at scientific animation. No doubt she must have been using a lot of wrist and hand movements!

With a positive response from her Salk labmates, Mortel decided to enter her scientific animation debut in the 2021 Show Us Your BRAINs! Photo and Video Contest. To her great surprise and delight, Mortel won third place in the video competition. Congratulations, and continued success to you and the team in producing this much-needed atlas of the circuitry underlying skilled arm, wrist, and hand movements.

Links:

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Spinal Cord Injury Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Samuel Pfaff (Salk Institute, La Jolla, CA)

Show Us Your BRAINs! Photo and Video Contest (Brain Initiative/NIH)

NIH Support: National Institute of Neurological Disorders and Stroke
