
Human Brain Compresses Working Memories into Low-Res ‘Summaries’


Caption: Stimulus images are disks of angled lines; a thought bubble shows similar angles in the participant's mind. Credit: Adapted from Kwak Y., Neuron (2022)

You have probably done it already a few times today. Paused to remember a password, a shopping list, a phone number, or maybe the score to last night’s ballgame. The ability to store and recall needed information, called working memory, is essential for most of the human brain’s higher cognitive processes.

Researchers are still just beginning to piece together how working memory functions. But recently, NIH-funded researchers added an intriguing new piece to this neurobiological puzzle: how visual working memories are “formatted” and stored in the brain.

The findings, published in the journal Neuron, show that the visual cortex—the brain’s primary region for receiving, integrating, and processing visual information from the eye’s retina—acts more like a blackboard than a camera. That is, the visual cortex doesn’t photograph all the complex details of a visual image, such as the color of paper on which your password is written or the precise series of lines that make up the letters. Instead, it recodes visual information into something more like simple chalkboard sketches.

The discovery suggests that those pared-down, low-res representations serve as a kind of abstract summary, capturing the relevant information while discarding features that aren't relevant to the task at hand. It also shows that different visual inputs, such as spatial orientation and motion, may be stored in a virtually identical, shared memory format.

The new study, from Clayton Curtis and Yuna Kwak, New York University, New York, builds upon a known fundamental aspect of working memory. Many years ago, it was determined that the human brain tends to recode visual information. For instance, when you're handed a 10-digit phone number on a card, the visual information typically gets recoded and stored in the brain as the sounds of the numbers being read aloud.

Curtis and Kwak wanted to learn more about how the brain formats representations of working memory in patterns of brain activity. To find out, they measured brain activity with functional magnetic resonance imaging (fMRI) while participants used their visual working memory.

In each trial, study participants were asked to hold a visual stimulus in memory for 12 seconds and then make a memory-based judgment about what they'd just seen. In some trials, as shown in the image above, participants were shown a tilted grating, a series of black and white lines oriented at a particular angle. In others, they observed a cloud of dots, all moving in a direction that represented those same angles. After the delay, participants were asked to recall and indicate, as precisely as possible, the angle of the grating's tilt or of the dot cloud's motion.

It turned out that either visual stimulus—the grating or moving dots—resulted in the same patterns of neural activity in the visual cortex and parietal cortex. The parietal cortex is a part of the brain used in memory processing and storage.

These two distinct visual memories carrying the same relevant information seemed to have been recoded into a shared abstract memory format. As a result, the pattern of brain activity evoked while recalling motion direction was indistinguishable from the pattern evoked while recalling grating orientation.
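
To get a feel for how a shared format can be tested, here is a minimal sketch of the cross-decoding logic in Python. The placeholder data, angle bins, and use of scikit-learn are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Hypothetical sketch of cross-decoding: a decoder fit on activity from
# one task should generalize to the other if both memories are stored
# in a shared format.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: trials x voxels activity patterns, plus the
# remembered angle (binned into classes) for each trial.
n_trials, n_voxels = 200, 500
grating_patterns = rng.normal(size=(n_trials, n_voxels))
grating_angles = rng.integers(0, 4, size=n_trials)   # 4 angle bins
motion_patterns = rng.normal(size=(n_trials, n_voxels))
motion_angles = rng.integers(0, 4, size=n_trials)

# Train on orientation trials, test on motion trials.
decoder = LogisticRegression(max_iter=1000)
decoder.fit(grating_patterns, grating_angles)
cross_acc = decoder.score(motion_patterns, motion_angles)

# A shared abstract format predicts cross-task accuracy close to
# within-task accuracy; stimulus-specific formats predict chance.
within_acc = cross_val_score(LogisticRegression(max_iter=1000),
                             motion_patterns, motion_angles, cv=5).mean()
print(f"cross-task: {cross_acc:.2f}, within-task: {within_acc:.2f}")
```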

This result indicated that only the task-relevant features of the visual stimuli had been extracted and recoded into a shared memory format. But Curtis and Kwak wondered whether there might be more to this finding.

To take a closer look, they used a sophisticated model that allowed them to project the three-dimensional patterns of brain activity into a more informative, two-dimensional representation of visual space. And, indeed, their analysis of the data revealed a line-like pattern, similar to a chalkboard sketch oriented at the relevant angles.
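
The general principle behind such a projection can be illustrated with a toy model: assume each voxel has a Gaussian receptive field in 2D visual space, then sum those receptive fields weighted by each voxel's activity. This is a simplified stand-in for the paper's model, with every value simulated.

```python
# Toy sketch of projecting voxel activity into 2D visual space.
# Assumption: each voxel has a Gaussian spatial receptive field (RF);
# an activity-weighted sum of RFs yields a 2D "sketch" of what the
# population encodes. Illustrative only.
import numpy as np

grid = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(grid, grid)

rng = np.random.default_rng(1)
n_voxels = 400
centers = rng.uniform(-1, 1, size=(n_voxels, 2))
sigma = 0.25

# Each entry: one voxel's receptive field over the 64x64 visual grid.
rfs = np.exp(-((xx[None] - centers[:, 0, None, None]) ** 2 +
               (yy[None] - centers[:, 1, None, None]) ** 2) / (2 * sigma**2))

# Simulate activity for a remembered 45-degree line through fixation.
angle = np.deg2rad(45)
line = (np.abs(xx * np.sin(angle) - yy * np.cos(angle)) < 0.05).astype(float)
activity = rfs.reshape(n_voxels, -1) @ line.ravel()

# Activity-weighted sum of receptive fields recovers a line-like map.
reconstruction = np.tensordot(activity, rfs, axes=1)   # shape (64, 64)
```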

The findings suggest that participants weren’t actually remembering the grating or a complex cloud of moving dots at all. Instead, they’d compressed the images into a line representing the angle that they’d been asked to remember.

Many questions remain about how remembering a simple angle, a relatively straightforward memory, will translate to the more complex sets of information stored in our working memory. On a technical level, though, the findings show that working memory can now be accessed and captured in ways that hadn't been possible before. This will help to delineate the commonalities in working memory formation, and the possible differences, whether it's remembering a password, a shopping list, or the score of your team's big victory last night.

Reference:

[1] Unveiling the abstract format of mnemonic representations. Kwak Y, Curtis CE. Neuron. 2022 Apr 7;110(1-7).

Links:

Working Memory (National Institute of Mental Health/NIH)

The Curtis Lab (New York University, New York)

NIH Support: National Eye Institute


Artificial Intelligence Getting Smarter! Innovations from the Vision Field



One of many health risks premature infants face is retinopathy of prematurity (ROP), a leading cause of childhood blindness worldwide. ROP causes abnormal blood vessel growth in the light-sensing eye tissue called the retina. Left untreated, ROP can lead to scarring, retinal detachment, and blindness. It's the disease that caused singer and songwriter Stevie Wonder to lose his vision.

Now, effective treatments are available, if the disease is diagnosed early and accurately. Advances in neonatal care have led to the survival of extremely premature infants, who are at highest risk for severe ROP. Yet despite major advances in diagnosis and treatment, about 600 infants in the U.S. still go blind each year from ROP. The disease is difficult to diagnose and manage, even for the most experienced ophthalmologists, and the challenges are far greater in remote corners of the world with limited access to ophthalmic and neonatal care.

Caption: Neonatal retinal images prior to AI processing. Left: a premature infant retina showing signs of severe ROP, with large, twisted blood vessels. Right: a normal neonatal retina for comparison. Credit: Casey Eye Institute, Oregon Health and Science University, Portland, and National Eye Institute, NIH

Artificial intelligence (AI) is helping bridge these gaps. Prior to my tenure as National Eye Institute (NEI) director, I helped develop a system called i-ROP Deep Learning (i-ROP DL), which automates the identification of ROP. In essence, we trained a computer to identify subtle abnormalities in retinal blood vessels from thousands of images of premature infant retinas. Strikingly, the i-ROP DL artificial intelligence system outperformed even international ROP experts [1]. This has enormous potential to improve the quality and delivery of eye care to premature infants worldwide.
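
While the actual i-ROP DL system is more sophisticated, the general recipe, fine-tuning a pretrained convolutional network on labeled retinal images, can be sketched briefly. The architecture, class labels, and hyperparameters below are illustrative assumptions, not the published system.

```python
# Hypothetical sketch of the general approach: adapt a pretrained
# convolutional network to classify retinal images (e.g., normal vs.
# pre-plus vs. plus disease). Not the actual i-ROP DL architecture.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)  # 3 illustrative classes

# Train only the new classification head (a simple form of transfer
# learning); full fine-tuning would optimize all parameters instead.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of labeled retinal images."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with dummy data shaped like standard ImageNet inputs.
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 3, (8,)))
```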

Of course, the promise of medical artificial intelligence extends far beyond ROP. In 2018, the FDA approved the first autonomous AI-based diagnostic tool in any field of medicine [2]. Called IDx-DR, the system streamlines screening for diabetic retinopathy (DR), and its results require no interpretation by a doctor. DR occurs when blood vessels in the retina grow irregularly, bleed, and potentially cause blindness. About 34 million people in the U.S. have diabetes, and each is at risk for DR.

As with ROP, early diagnosis and intervention are crucial to preventing vision loss from DR. The American Diabetes Association recommends that people with diabetes see an eye care provider annually to have their retinas examined for signs of DR. Yet fewer than 50 percent of Americans with diabetes receive these annual eye exams.

The IDx-DR system was conceived by Michael Abramoff, an ophthalmologist and AI expert at the University of Iowa, Iowa City. With NEI funding, Abramoff used deep learning to design a system for use in a primary-care medical setting. A technician with minimal ophthalmology training can use the IDx-DR system to scan a patient's retinas and get results indicating whether the patient should be referred to an eye specialist for follow-up evaluation or simply return for another scan in 12 months.
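
The clinical workflow reduces to a simple triage rule. Here is a hypothetical sketch of it; the field names and decision categories are illustrative, not the device's actual output format.

```python
# Sketch of the screening workflow described above, with hypothetical
# names and categories: the system returns a result that routes the
# patient either to a specialist or to a routine 12-month rescreen.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    more_than_mild_dr: bool   # illustrative output flag
    image_quality_ok: bool

def triage(result: ScreeningResult) -> str:
    if not result.image_quality_ok:
        return "retake images"
    if result.more_than_mild_dr:
        return "refer to eye specialist for follow-up evaluation"
    return "rescreen in 12 months"

print(triage(ScreeningResult(more_than_mild_dr=True, image_quality_ok=True)))
```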

Caption: The IDx-DR is the first FDA-approved system for diagnostic screening of diabetic retinopathy. It’s designed to be used in a primary care setting. Results determine whether a patient needs immediate follow-up. Credit: Digital Diagnostics, Coralville, IA.

Many other methodological innovations in AI have come from ophthalmology, because imaging is so central to diagnosing eye disease and because clinical outcome data are so readily available. As a result, AI-based diagnostic systems are in development for many other eye diseases, including cataract, age-related macular degeneration (AMD), and glaucoma.

Rapid advances in AI are occurring in other medical fields, such as radiology, cardiology, and dermatology. But disease diagnosis is just one of many applications for AI. Neurobiologists are using AI to answer questions about retinal and brain circuitry, disease modeling, microsurgical devices, and drug discovery.

If it sounds too good to be true, it may be. A lot of work remains to be done, and significant challenges to using AI in science and medicine persist. For example, researchers from the University of Washington, Seattle, last year tested seven AI-based screening algorithms designed to detect DR. They found that, under real-world conditions, only one outperformed human screeners [3]. A key problem is that these AI algorithms need to be trained with more diverse images and data, spanning a wider range of races, ethnicities, and populations, as well as different types of cameras.
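
Validation studies like this one typically score each screening system against a reference standard using sensitivity and specificity. Here is a generic sketch of that comparison with placeholder data; none of the numbers reflect the study's actual results.

```python
# Sketch of a head-to-head comparison: compute sensitivity and
# specificity for an algorithm and for human screeners against a
# reference standard. Data below are random placeholders.
import numpy as np

def sens_spec(predicted: np.ndarray, reference: np.ndarray):
    tp = np.sum((predicted == 1) & (reference == 1))
    tn = np.sum((predicted == 0) & (reference == 0))
    fn = np.sum((predicted == 0) & (reference == 1))
    fp = np.sum((predicted == 1) & (reference == 0))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(2)
reference = rng.integers(0, 2, size=1000)   # ground-truth DR labels
# Simulated graders that agree with the reference 90% / 85% of the time.
algorithm = np.where(rng.random(1000) < 0.90, reference, 1 - reference)
human = np.where(rng.random(1000) < 0.85, reference, 1 - reference)

for name, preds in [("algorithm", algorithm), ("human screener", human)]:
    sens, spec = sens_spec(preds, reference)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```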

How do we address these gaps in knowledge? We’ll need larger datasets, a collaborative culture of sharing data and software libraries, broader validation studies, and algorithms to address health inequities and to avoid bias. The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) project and NIH’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Program project will be major steps toward addressing those gaps.

So, yes—AI is getting smarter. But harnessing its full power will rely on scientists and clinicians getting smarter, too.

References:

[1] Automated diagnosis of plus disease in retinopathy of prematurity using deep convolutional neural networks. Brown JM, Campbell JP, Beers A, Chang K, Ostmo S, Chan RVP, Dy J, Erdogmus D, Ioannidis S, Kalpathy-Cramer J, Chiang MF; Imaging and Informatics in Retinopathy of Prematurity (i-ROP) Research Consortium. JAMA Ophthalmol. 2018 Jul 1;136(7):803-810.

[2] FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. Food and Drug Administration. April 11, 2018.

[3] Multicenter, head-to-head, real-world validation study of seven automated artificial intelligence diabetic retinopathy screening systems. Lee AY, Yanagihara RT, Lee CS, Blazes M, Jung HC, Chee YE, Gencarella MD, Gee H, Maa AY, Cockerham GC, Lynch M, Boyko EJ. Diabetes Care. 2021 May;44(5):1168-1175.

Links:

Retinopathy of Prematurity (National Eye Institute/NIH)

Diabetic Eye Disease (NEI)

NEI Research News

Michael Abramoff (University of Iowa, Iowa City)

Bridge to Artificial Intelligence (Common Fund/NIH)

Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Program (NIH)

[Note: Acting NIH Director Lawrence Tabak has asked the heads of NIH’s institutes and centers to contribute occasional guest posts to the blog as a way to highlight some of the cool science that they support and conduct. This is the second in the series of NIH institute and center guest posts that will run until a new permanent NIH director is in place.]


Groundbreaking Study Maps Key Brain Circuit


Biologists have long wondered how neurons from different regions of the brain actually interconnect into integrated neural networks, or circuits. A classic example is a complex master circuit projecting across several regions of the vertebrate brain called the basal ganglia. It’s involved in many fundamental brain processes, such as controlling movement, thought, and emotion.

In a paper published recently in the journal Nature, an NIH-supported team working in mice created a wiring diagram, or connectivity map, of a key component of this master circuit that controls voluntary movement. This groundbreaking map will guide future studies of the basal ganglia's direct connections with the thalamus, a hub for information going to and from the spinal cord, as well as its links to the motor cortex in the front of the brain, which controls voluntary movements.

This 3D animation drawn from the paper’s findings captures the biological beauty of these intricate connections. It starts out zooming around four of the six horizontal layers of the motor cortex. At about 6 seconds in, the video focuses on nerve cell projections from the thalamus (blue) connecting to cortex nerve cells that provide input to the basal ganglia (green). It also shows connections to the cortex nerve cells that input to the thalamus (red).

At about 25 seconds, the video scans back to provide a quick close-up of the cell bodies (green and red bulges). It then zooms out to show the broader distribution of nerve cells within the cortex layers and the branched fringes of corticothalamic nerve cells (red) at the top edge of the cortex.

The video comes from scientific animator Jim Stanis, University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles. He collaborated with Nick Foster, lead author on the Nature paper and a research scientist in the NIH-supported lab of Hong-Wei Dong at the University of California, Los Angeles.

The two worked together to bring to life hundreds of microscopic images of this circuit, known by the unusually long, hyphenated name: the cortico-basal ganglia-thalamic loop. It consists of a series of subcircuits that feed into a larger signaling loop.

The subcircuits in the loop make it possible to connect thinking with movement, helping the brain learn useful sequences of motor activity. The looped subcircuits also allow the brain to perform very complex tasks such as achieving goals (completing a marathon) and adapting to changing circumstances (running uphill or downhill).

Although scientists had long assumed that the cortico-basal ganglia-thalamic loop existed and formed a tight, closed loop, they had no real proof. This new research, funded through NIH's Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, provides that proof, showing anatomically that the nerve cells physically connect, as highlighted in this video. The research also provides electrical proof, through tests showing that stimulating individual segments of the loop activates the others.
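
The anatomical claim can be pictured as a tiny data structure. Below is a minimal sketch, with the basal ganglia collapsed into a single node for simplicity, that represents the projections as a directed graph and checks that following them leads back to the starting region, i.e., that the loop is closed.

```python
# Sketch of the anatomical claim as a data structure: represent the
# projections as a directed graph and verify that walking them from the
# cortex eventually returns to the cortex. Region granularity is
# deliberately simplified for illustration.
projections = {
    "motor cortex": ["basal ganglia"],
    "basal ganglia": ["thalamus"],
    "thalamus": ["motor cortex"],
}

def is_closed_loop(graph: dict, start: str) -> bool:
    """Follow the first projection out of each region; return True if
    the walk returns to `start` before revisiting any other region."""
    seen, node = set(), start
    while True:
        node = graph[node][0]
        if node == start:
            return True
        if node in seen:
            return False
        seen.add(node)

print(is_closed_loop(projections, "motor cortex"))  # True
```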

Detailed maps of neural circuits are in high demand. That’s what makes results like these so exciting to see. Researchers can now better navigate this key circuit not only in mice but other vertebrates, including humans. Indeed, the cortico-basal ganglia-thalamic loop may be involved in a number of neurological and neuropsychiatric conditions, including Huntington’s disease, Parkinson’s disease, schizophrenia, and addiction. In the meantime, Stanis, Foster, and colleagues have left us with a very cool video to watch.

Reference:

[1] The mouse cortico-basal ganglia-thalamic network. Foster NN, Barry J, Korobkova L, Garcia L, Gao L, Becerra M, Sherafat Y, Peng B, Li X, Choi JH, Gou L, Zingg B, Azam S, Lo D, Khanjani N, Zhang B, Stanis J, Bowman I, Cotter K, Cao C, Yamashita S, Tugangui A, Li A, Jiang T, Jia X, Feng Z, Aquino S, Mun HS, Zhu M, Santarelli A, Benavidez NL, Song M, Dan G, Fayzullina M, Ustrell S, Boesen T, Johnson DL, Xu H, Bienkowski MS, Yang XW, Gong H, Levine MS, Wickersham I, Luo Q, Hahn JD, Lim BK, Zhang LI, Cepeda C, Hintiryan H, Dong HW. Nature. 2021;598(7879):188-194.

Links:

Brain Basics: Know Your Brain (National Institute of Neurological Disorders and Stroke/NIH)

Dong Lab (University of California, Los Angeles)

Mark and Mary Stevens Neuroimaging and Informatics Institute (University of Southern California, Los Angeles)

The Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: Eunice Kennedy Shriver National Institute of Child Health and Human Development; National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


Tapping Into The Brain’s Primary Motor Cortex


If you’re like me, you might catch yourself during the day in front of a computer screen mindlessly tapping your fingers. (I always check first to be sure my mute button is on!) But all that tapping isn’t as mindless as you might think.

While a research participant performs a simple motor task, tapping her fingers together, this video shows blood flow within the folds of her brain's primary motor cortex (gray and white), which controls voluntary movement. Areas of high brain activity (yellow and red) emerge in the omega-shaped “hand-knob” region, the part of the brain controlling hand movement (right of center), and then further back within the primary somatosensory cortex, which borders the motor cortex toward the back of the head.

About 38 seconds in, the right half of the video screen illustrates that the finger tapping activates both superficial and deep layers of the primary motor cortex. In contrast, the sensation of a hand being brushed (a sensory task) mostly activates superficial layers, where the primary sensory cortex is located. This fits with what we know about the superficial and deep layers of the hand-knob region, since they are responsible for receiving sensory input and generating motor output to control finger movements, respectively [1].

The video showcases a new technology called zoomed 7T perfusion functional MRI (fMRI). It was an entry in the recent Show Us Your BRAINs! Photo and Video Contest, supported by NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative.

The technology is under development by an NIH-funded team led by Danny J.J. Wang, University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles. Zoomed 7T perfusion fMRI was developed by Xingfeng Shao and brought to life by the group’s medical animator Jim Stanis.

Measuring brain activity using fMRI to track perfusion is not new. The brain needs a lot of oxygen, carried to it by arteries running throughout the head, to carry out its many complex functions. Given the importance of oxygen to the brain, you can think of perfusion levels, measured by fMRI, as a stand-in measure for neural activity.
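
The team's approach builds on arterial spin labeling (ASL), the perfusion technique named in their preprint [2]. As a rough sketch of how perfusion is quantified, the widely used single-compartment model converts the control-minus-label signal difference into cerebral blood flow; the parameter values below are common defaults from the ASL literature, not this study's settings.

```python
# Rough sketch: standard single-compartment ASL quantification,
# converting the control-minus-label signal difference into cerebral
# blood flow (CBF, mL/100 g/min). Parameter values are common defaults
# from the ASL consensus literature, not this study's settings.
import numpy as np

def asl_cbf(delta_m, m0, pld=1.8, tau=1.8, t1_blood=1.65,
            labeling_eff=0.85, blood_brain_lambda=0.9):
    """delta_m: control - label signal; m0: equilibrium magnetization;
    pld: post-labeling delay (s); tau: labeling duration (s)."""
    numerator = 6000.0 * blood_brain_lambda * delta_m * np.exp(pld / t1_blood)
    denominator = (2.0 * labeling_eff * t1_blood * m0 *
                   (1.0 - np.exp(-tau / t1_blood)))
    return numerator / denominator

# Example: a ~0.7% signal difference maps to a plausible gray-matter CBF.
print(asl_cbf(delta_m=0.007, m0=1.0))   # ~60 mL/100 g/min
```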

There are two things that are new about zoomed 7T perfusion fMRI. For one, it uses the first ultrahigh magnetic field imaging scanner approved by the Food and Drug Administration. The technology also has high sensitivity for detecting blood flow changes in tiny arteries and capillaries throughout the many layers of the cortex [2].

Compared to previous MRI methods with weaker magnets, the new technique can measure blood flow on a fine-grained scale, enabling scientists to remove unwanted signals (“noise”) such as those from surface-level arteries and veins. Getting an accurate read-out of activity from region to region across cortical layers can help scientists understand human brain function in greater detail in health and disease.

Having shown that the technology works as expected during relatively mundane hand movements, Wang and his team are now developing the approach for fine-grained 3D mapping of brain activity throughout the many layers of the brain. This type of analysis, known as mesoscale mapping, is key to understanding dynamic activities of neural circuits that connect brain cells across cortical layers and among brain regions.

Decoding circuits, and ultimately rewiring them, is a major goal of NIH’s BRAIN Initiative. Zoomed 7T perfusion fMRI gives us a window into 4D biology, which is the ability to watch 3D objects over time scales in which life happens, whether it’s playing an elaborate drum roll or just tapping your fingers.

References:

[1] Neuroanatomical localization of the ‘precentral knob’ with computed tomography imaging. Park MC, Goldman MA, Park MJ, Friehs GM. Stereotact Funct Neurosurg. 2007;85(4):158-61.

[2] Laminar perfusion imaging with zoomed arterial spin labeling at 7 Tesla. Shao X, Guo F, Shou Q, Wang K, Jann K, Yan L, Toga AW, Zhang P, Wang DJJ. bioRxiv. 2021.04.13.439689.

Links:

Brain Basics: Know Your Brain (National Institute of Neurological Disorders and Stroke/NIH)

Laboratory of Functional MRI Technology (University of Southern California Mark and Mary Stevens Neuroimaging and Informatics Institute)

The Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Show Us Your BRAINs! Photo and Video Contest (BRAIN Initiative)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering; Office of the Director


Precision Deep Brain Stimulation Shows Initial Promise for Severe Depression


Caption: Implanted deep brain stimulation with one lead (blue) in the amygdala, and the other lead (red) in the ventral capsule/ventral striatum. Credit: Ken Probst, University of California, San Francisco

For many people struggling with depression, antidepressants and talk therapy can help to provide relief. But for some, the treatments don’t help nearly enough. I’m happy to share some early groundbreaking research in alleviating treatment-resistant depression in a whole new way: implanting a pacemaker-like device capable of delivering therapeutic electrical impulses deep into the brain, aiming for the spot where they can reset the depression circuit.

What’s so groundbreaking about the latest approach—so far, performed in just one patient—is that the electrodes didn’t simply deliver constant electrical stimulation. The system could recognize the specific pattern of brain activity associated with the patient’s depressive symptoms and deliver electrical impulses to the brain circuit where it could provide the most relief.

While much more study is needed, this precision approach to deep brain stimulation (DBS) therapy offered immediate improvement to the patient, a 36-year-old woman who’d suffered from treatment-resistant major depressive disorder since childhood. Her improvement has lasted now for more than a year.

This precision approach to DBS has its origins in clinical research supported through NIH’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative. A team, led by Edward Chang, a neurosurgeon at the University of California San Francisco’s (UCSF) Epilepsy Center, discovered while performing DBS that the low mood in some patients with epilepsy before surgery was associated with stronger activity in a “subnetwork” deep within the brain’s neural circuitry. The subnetwork involved crosstalk between the brain’s amygdala, which mediates fear and other emotions, and the hippocampus, which aids in memory.

Researchers led by Andrew Krystal, UCSF, Weill Institute for Neurosciences, attempted in the latest work to translate this valuable lead into improved care for depression. Their results were published recently in the journal Nature Medicine [1].

Krystal and colleagues, including Chang and Katherine Scangos, the first author of the new study, began by mapping the patterns of brain activity in the patient that were associated with the onset of her low moods. They then customized an FDA-approved DBS device to respond only when it recognized those specific patterns. Called NeuroPace® RNS®, the device includes a small neurostimulator and measures about 6 by 3 centimeters, allowing it to be fully implanted inside a person's skull. There, it continuously monitors brain activity and can deliver electrical stimulation via two leads, as shown in the image above [2].

Researchers found they could detect and predict high symptom severity best in the amygdala, as previously reported. The next question was where the electrical stimulation would best relieve those troubling brain patterns and associated symptoms. They discovered that stimulation in the brain’s ventral capsule/ventral striatum, part of the brain’s circuitry for decision-making and reward-related behavior, led to the most consistent and sustained improvements. Based on these findings, the team devised an on-demand and immediate DBS therapy that was unique to the patient’s condition.
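
Conceptually, the closed loop reduces to: monitor a biomarker from the sensing lead, and stimulate only when it crosses a detection threshold. Here is a minimal sketch of that policy; the feature, threshold, and stimulation settings are purely illustrative, not the embedded algorithm the device actually runs.

```python
# Minimal sketch of a closed-loop ("on-demand") DBS policy: trigger a
# brief burst of stimulation only when the symptom-linked biomarker is
# detected. All parameters here are illustrative assumptions, not the
# NeuroPace RNS device's actual settings.
import numpy as np

THRESHOLD = 2.5        # hypothetical z-scored biomarker threshold
BURST_SECONDS = 6.0    # hypothetical stimulation duration

class LoggingStimulator:
    """Stand-in for the implanted stimulator's interface."""
    def deliver_burst(self, target: str, seconds: float) -> None:
        print(f"stimulating {target} for {seconds:g} s")

def closed_loop_step(amygdala_feature: float, stimulator) -> None:
    """One monitoring cycle: stimulate only on threshold crossing."""
    if amygdala_feature > THRESHOLD:
        stimulator.deliver_burst(target="ventral capsule/ventral striatum",
                                 seconds=BURST_SECONDS)

rng = np.random.default_rng(3)
stim = LoggingStimulator()
for feature in rng.normal(scale=1.5, size=100):   # simulated biomarker stream
    closed_loop_step(feature, stim)
```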

It will be important to learn whether this precision approach to DBS is broadly effective for managing treatment-resistant depression and perhaps other psychiatric conditions. It will take much more study and time before such an approach to treating depression can become more widely available. Also, it is not yet clear just how much it would cost. But these remarkable new findings certainly point the way toward a promising new approach that will hopefully one day bring another treatment option for those in need of relief from severe depression.

References:

[1] Closed-loop neuromodulation in an individual with treatment-resistant depression. Scangos KW, Khambhati AN, Daly PM, Makhoul GS, Sugrue LP, Zamanian H, Liu TX, Rao VR, Sellers KK, Dawes HE, Starr PA, Krystal AD, Chang EF. Nat Med. 2021 Oct;27(10):1696-1700.

[2] The NeuroPace® RNS® System for responsive neurostimulation, NIH BRAIN Initiative.

Links:

Depression (National Institute of Mental Health/NIH)

Deep Brain Stimulation for Parkinson’s Disease and other Movement Disorders (National Institute of Neurological Disorders and Stroke/NIH)

Andrew Krystal (University of California San Francisco)

Katherine Scangos (UCSF)

Edward Chang (UCSF)

NIH Support: National Institute of Neurological Disorders and Stroke

