

How Our Brains Replay Memories


Retrieving a Memory
Caption: Encoding and replaying learned memory. Left panel shows the timed sequence of neurons firing in a part of a person’s brain involved in memory as it encodes the random pair of words, “crow” and “jeep.” Colors are assigned to different neurons to differentiate their firing within the sequence. Right panel shows a highly similar timed sequence of those same neurons firing just before a person given the word “jeep,” recalled and said the correct answer “crow.” Credit: Vaz AP, Science, 2020.

Note to my blog readers: the whole world is now facing a major threat from the COVID-19 pandemic. We at NIH are doing everything we can to apply the best and most powerful science to the development of diagnostics, therapeutics, and vaccines, while also implementing public health measures to protect our staff and the patients in our hospital. This crisis is expected to span many weeks, and I will occasionally report on COVID-19 in this blog format. Meanwhile, science continues to progress on many other fronts—and so I will continue to try to bring you stories across a wide range of topics. Perhaps everyone can use a little break now and then from the coronavirus news? Today’s blog takes you into the intricacies of memory.

When recalling the name of an acquaintance, you might replay an earlier introduction, trying to remember the correct combination of first and last names. (Was it Scott James? Or James Scott?) Now, neuroscientists have found that in the split second before you come up with the right answer, your brain’s neurons fire in the same order as when you first learned the information [1].

This new insight into memory retrieval comes from recording the electrical activity of thousands of neurons in the brains of six people during memory tests of random word pairs, such as “jeep” and “crow.” While similar firing patterns had been described before in mice, the new study is the first to confirm that the human brain stores memories in specific sequences of neural activity that can be replayed again and again.

The new study, published in the journal Science, is the latest insight from neurosurgeon and researcher Kareem Zaghloul at NIH’s National Institute of Neurological Disorders and Stroke (NINDS). Zaghloul’s team has for years been conducting an NIH Clinical Center study of patients with drug-resistant epilepsy, whose seizures cannot be controlled with medication.

As part of this work, his surgical team often temporarily places a 4-millimeter-by-4-millimeter array of tiny electrodes on the surface of a participant’s brain. They do this to pinpoint the brain tissues that may be the source of the seizures before performing surgery to remove them. With a patient’s informed consent to take part in additional research, the procedure has also led to a series of insights into what happens in the human brain when we make and later retrieve new memories.

Here’s how it works: The researchers record electrical currents as participants are asked to learn random word pairs presented to them on a computer screen, such as “cake” and “fox,” or “lime” and “camel.” After a period of rest, their brain activity is again recorded as they are given a word and asked to recall the matching word.

Last year, the researchers reported that, in the split second before a person got the right answer, tiny ripples of electrical activity appeared in two specific areas of the brain [2]. The team had also shown that, when a person correctly recalled a word pair, the brain displayed patterns of activity that corresponded to those formed when he or she first learned to make the word association.

The new work takes this a step further. As study participants learned a word pair, the researchers noticed not only the initial rippling wave of electricity, but also that particular neurons in the brain’s cerebral cortex fired repeatedly in a sequential order. In fact, with each new word pair, the researchers observed unique firing patterns among the active neurons.

If the order of neuronal firing is essential for storing new memories, the researchers reasoned, the same should be true for correctly retrieving the information. And, indeed, that’s what they were able to show. For example, when individuals were shown “cake” a second time, their neurons replayed a firing pattern very similar to the one recorded during learning, just milliseconds before they correctly recalled the paired word “fox.”

The researchers then calculated the average sequence similarity between the firing patterns of learning and retrieval. They found that as a person recalled a word, those patterns gradually became more similar. Just before a correct answer was given, the recorded neurons locked onto the right firing sequence. That didn’t happen when a person gave an incorrect answer.
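The study’s actual similarity measure is more sophisticated, but the core idea of comparing firing orders between learning and retrieval can be illustrated with a toy sketch. Everything below (neuron labels, sequences, the rank-correlation choice) is made up for illustration, not taken from the paper:

```python
# Toy illustration (not the study's actual analysis): compare the ORDER in
# which the same neurons fire during learning vs. just before recall.
# Each list gives neuron IDs in the order they first fired.

def rank_of(seq):
    """Map each neuron ID to its position (rank) in the firing sequence."""
    return {neuron: i for i, neuron in enumerate(seq)}

def sequence_similarity(learn_seq, recall_seq):
    """Spearman-style rank correlation over neurons present in both
    sequences. Returns a value in [-1, 1]; 1 means identical firing order."""
    common = [n for n in learn_seq if n in set(recall_seq)]
    if len(common) < 2:
        return 0.0
    r1, r2 = rank_of(learn_seq), rank_of(recall_seq)
    a = [r1[n] for n in common]
    b = [r2[n] for n in common]
    n = len(common)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

# Hypothetical firing orders for one word pair:
learning = ["n1", "n4", "n2", "n7", "n3"]
correct_recall = ["n1", "n4", "n2", "n3", "n7"]   # nearly the same order
wrong_recall = ["n3", "n7", "n1", "n2", "n4"]     # scrambled order

print(sequence_similarity(learning, correct_recall))  # close to 1
print(sequence_similarity(learning, wrong_recall))    # much lower, negative
```

On this toy metric, a correct recall scores near 1 while a scrambled replay scores low or negative, mirroring the “locking onto the right firing sequence” the researchers observed.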

Further analysis confirmed that the exact order of neural firing was specific to each word pair. The findings show that our memories are encoded as unique sequences that must be replayed for accurate retrieval, though we still don’t understand the molecular mechanisms that undergird this.

Zaghloul reports that there’s still more to learn about how these processes are influenced by other factors, such as attention. It’s also not yet known whether the brain replays sequences similarly when retrieving longer-term memories. Along with these intriguing insights into normal learning and memory, the researchers think this line of research will yield important clues about what changes in people who suffer from memory disorders, with potentially important implications for developing the next generation of treatments.

Reference:

[1] Replay of cortical spiking sequences during human memory retrieval. Vaz AP, Wittig JH Jr, Inati SK, Zaghloul KA. Science. 2020 Mar 6;367(6482):1131-1134.

[2] Coupled ripple oscillations between the medial temporal lobe and neocortex retrieve human memory. Vaz AP, Inati SK, Brunel N, Zaghloul KA. Science. 2019 Mar 1;363(6430):975-978.

Links:

Epilepsy Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Brain Basics (NINDS)

Zaghloul Lab (NINDS)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of General Medical Sciences


Seeing the Cytoskeleton in a Whole New Light


It’s been 25 years since researchers coaxed a bacterium to synthesize an unusual jellyfish protein that fluoresced bright green when irradiated with blue light. Within months, another group had fused this small green fluorescent protein (GFP) to larger proteins, bringing their whereabouts inside the cell to light like never before.

To mark the anniversary of this Nobel Prize-winning work and show off the rainbow of color that is now being used to illuminate the inner workings of the cell, the American Society for Cell Biology (ASCB) recently held its Green Fluorescent Protein Image and Video Contest. Over the next few months, my blog will feature some of the most eye-catching entries—starting with this video that will remind those who grew up in the 1980s of those plasma balls that, when touched, light up with a simulated bolt of colorful lightning.

This video, which took third place in the ASCB contest, shows the cytoskeleton of a frequently studied human breast cancer cell line. Part of the cytoskeleton is built from protein structures called microtubules, made visible here by fluorescently tagging a microtubule-binding protein called doublecortin (orange). Filaments of another protein, called actin (purple), are seen as the fine meshwork in the cell periphery.

The cytoskeleton plays an important role in giving cells shape and structure. But it also allows a cell to move and divide. Indeed, the motion in this video shows that the complex network of cytoskeletal components is constantly being organized and reorganized in ways that researchers are still working hard to understand.

Jeffrey van Haren, Erasmus University Medical Center, Rotterdam, the Netherlands, shot this video using the tools of fluorescence microscopy when he was a postdoctoral researcher in the NIH-funded lab of Torsten Wittmann, University of California, San Francisco.

All good movies have unusual plot twists, and that’s truly the case here. Though the researchers are using a breast cancer cell line, their primary interest is in the doublecortin protein, which is normally found in association with microtubules in the developing brain. In fact, in people with mutations in the gene that encodes this protein, neurons fail to migrate properly during development. The resulting condition, called lissencephaly, leads to epilepsy, cognitive disability, and other neurological problems.

Cancer cells don’t usually express doublecortin. But, in their initial studies, the Wittmann team reasoned it would be much easier to visualize and study doublecortin in cancer cells than in neurons. And so the researchers tagged doublecortin with an orange fluorescent protein, engineered its expression in the breast cancer cells, and van Haren started taking pictures.

This movie and others helped lead to the intriguing discovery that doublecortin binds to microtubules in some places and not others [1]. It appears to do so based on the ability to recognize and bind to certain microtubule geometries. The researchers have since moved on to studies in cultured neurons.

This video is certainly a good example of the illuminating power of fluorescent proteins: enabling us to see cells and their cytoskeletons as incredibly dynamic, constantly moving entities. And, if you’d like to see much more where this came from, consider visiting van Haren’s Twitter gallery of microtubule videos.

Reference:

[1] Doublecortin is excluded from growing microtubule ends and recognizes the GDP-microtubule lattice. Ettinger A, van Haren J, Ribeiro SA, Wittmann T. Curr Biol. 2016 Jun 20;26(12):1549-1555.

Links:

Lissencephaly Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Wittmann Lab (University of California, San Francisco)

Green Fluorescent Protein Image and Video Contest (American Society for Cell Biology, Bethesda, MD)

NIH Support: National Institute of General Medical Sciences


Looking for Answers to Epilepsy in a Blood Test


Gemma Carvill (second from right) with members of her lab. Courtesy of Gemma Carvill

Millions of people take medications each day for epilepsy, a diverse group of disorders characterized by seizures. But, for about a third of people with epilepsy, current drug treatments don’t work very well. What’s more, the medications are designed to treat symptoms of these disorders, basically by suppressing seizure activity. The medications don’t really change the underlying causes, which are wired deep within the brain.

Gemma Carvill, a researcher at Northwestern University Feinberg School of Medicine, Chicago, wants to help change that in the years ahead. She’s dedicated her research career to discovering the genetic causes of epilepsy in hopes of one day designing treatments that can control or even cure some forms of the disorder [1].

It certainly won’t be easy. A recent paper put the number of known genes associated with epilepsy at close to 1,000 [2]. However, because some disease-causing genetic variants may arise during development, and therefore occur only within the brain, it’s possible that additional genetic causes of epilepsy are still waiting to be discovered within the billions of cells and their trillions of interconnections.

To find these new leads, Carvill won’t have to rely only on biopsies of brain tissue. She’s received a 2018 NIH Director’s New Innovator Award to search for answers hidden within “liquid biopsies”: tiny fragments of DNA that, as research on other forms of brain injury and neurological disease suggests [3], may spill into the bloodstream and cerebrospinal fluid (CSF) from dying neurons or other brain cells after a seizure.

Carvill and her team will start with mouse models of epilepsy to test whether it’s possible to detect DNA fragments from the brain in bodily fluids after a seizure. They’ll also attempt to show that these DNA fragments carry telltale signatures indicating which cells and tissues in the brain they came from. The hope is that these initial studies will also tell them the best time after a seizure to collect blood samples.
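The tissue-of-origin idea behind those telltale signatures, described for circulating DNA in reference [3], can be caricatured in a few lines of Python. This is only a toy sketch with invented methylation patterns, not real data or Carvill’s actual method: different cell types methylate characteristic CpG sites differently, so a fragment’s pattern can be matched against tissue reference signatures.

```python
# Toy sketch (hypothetical signatures, not real data): classify a circulating
# DNA fragment by comparing its CpG methylation pattern (1 = methylated,
# 0 = unmethylated) against tissue-specific reference patterns.

# Invented reference methylation signatures at the same 8 CpG sites:
SIGNATURES = {
    "neuron":     [0, 0, 1, 0, 1, 0, 0, 1],
    "blood cell": [1, 1, 0, 1, 0, 1, 1, 0],
    "hepatocyte": [1, 0, 0, 1, 1, 1, 0, 0],
}

def tissue_of_origin(fragment):
    """Return the tissue whose signature best matches the fragment
    (fewest mismatched CpG sites)."""
    def mismatches(sig):
        return sum(a != b for a, b in zip(fragment, sig))
    return min(SIGNATURES, key=lambda t: mismatches(SIGNATURES[t]))

# A fragment that nearly matches the neuronal pattern would point to
# brain-cell death, e.g., following a seizure:
fragment = [0, 0, 1, 0, 1, 0, 1, 1]   # one site differs from "neuron"
print(tissue_of_origin(fragment))
```

In practice the signatures span many more sites, sequencing is error-prone, and the brain signal is a tiny fraction of total circulating DNA, which is part of what makes the project so ambitious.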

In people, Carvill’s team will collect the DNA fragments and begin searching for genetic alterations to explain the seizures, capitalizing on Carvill’s considerable expertise in the use of next generation DNA sequencing technology for ferreting out disease-causing variants. Importantly, if this innovative work in epilepsy pans out, it also can be applied to any other neurological condition in which DNA spills from dying brain cells, including Alzheimer’s disease and Parkinson’s disease.

References:

[1] Unravelling the genetic architecture of autosomal recessive epilepsy in the genomic era. Calhoun JD, Carvill GL. J Neurogenet. 2018 Sep 24:1-18.

[2] Epilepsy-associated genes. Wang J, Lin ZJ, Liu L, Xu HQ, Shi YW, Yi YH, He N, Liao WP. Seizure. 2017 Jan;44:11-20.

[3] Identification of tissue-specific cell death using methylation patterns of circulating DNA. Lehmann-Werman R, Neiman D, Zemmour H, Moss J, Magenheim J, Vaknin-Dembinsky A, Rubertsson S, Nellgård B, Blennow K, Zetterberg H, Spalding K, Haller MJ, Wasserfall CH, Schatz DA, Greenbaum CJ, Dorrell C, Grompe M, Zick A, Hubert A, Maoz M, Fendrich V, Bartsch DK, Golan T, Ben Sasson SA, Zamir G, Razin A, Cedar H, Shapiro AM, Glaser B, Shemer R, Dor Y. Proc Natl Acad Sci U S A. 2016 Mar 29;113(13):E1826-34.

Links:

Epilepsy Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Gemma Carvill Lab (Northwestern University Feinberg School of Medicine, Chicago)

Carvill Project Information (NIH RePORTER)

NIH Director’s New Innovator Award (Common Fund)

NIH Support: Common Fund; National Institute of Neurological Disorders and Stroke


The Brain Ripples Before We Remember


Credit: Thinkstock

Throw a stone into a quiet pond, and you’ll see ripples expand across the water from the point where it went in. Now, neuroscientists have discovered that a different sort of ripple—an electrical ripple—spreads across the human brain when it strives to recall memories.

In memory games involving 14 very special volunteers, an NIH-funded team found that the split second before a person nailed the right answer, tiny ripples of electrical activity appeared in two specific areas of the brain [1]. If the volunteer recalled an answer incorrectly or didn’t answer at all, the ripples were much less likely to appear. While many questions remain, the findings suggest that the short, high-frequency electrical waves seen in these brain ripples may play an unexpectedly important role in our ability to remember.

The new study, published in Science, builds on brain recording data compiled over the last several years by neurosurgeon and researcher Kareem Zaghloul at NIH’s National Institute of Neurological Disorders and Stroke (NINDS). Zaghloul’s surgical team often temporarily places 10 to 20 arrays of tiny electrodes into the brains of people with drug-resistant epilepsy. As I’ve highlighted recently, the brain mapping procedure aims to pinpoint the source of a patient’s epileptic seizures. But, with a patient’s permission, the procedure also presents an opportunity to learn more about how the brain works, with exceptional access to its circuits.

One such opportunity is to explore how the brain stores and recalls memories. To do this, the researchers show their patient volunteers hundreds of pairs of otherwise unrelated words, such as “pencil and bishop” or “orange and navy.” Later, they show them one word of a pair and test whether they can recall its match. All the while, electrodes record the brain’s electrical activity.

Previously published studies by Zaghloul’s lab [2, 3] and many others have shown that memory involves the activation of a number of brain regions. That includes the medial temporal lobe, which is involved in forming and retrieving memories, and the prefrontal cortex, which helps in organizing memories in addition to its roles in “executive functions,” such as planning and setting goals. Those studies also have highlighted a role for the temporal association cortex, another portion of the temporal lobe involved in processing experiences and words.

In their data collected in patients with epilepsy, Zaghloul’s team’s earlier studies had uncovered some telltale patterns. For instance, when a person correctly recalled a word pair, the brain showed patterns of activity that looked quite similar to those present when he or she first learned to make a word association.

Alex Vaz, one of Zaghloul’s doctoral students, thought there might be more to the story. Emerging evidence in rodents suggested that brain ripples—short bursts of high-frequency electrical activity—are involved in learning. There was also some evidence in people that such ripples might be important for solidifying memories during sleep. Vaz wondered whether he might find evidence of ripples as well in data gathered from people who were awake.

Vaz’s hunch was correct. The reanalysis revealed ripples of electricity in the medial temporal lobe and the temporal association cortex. When a person correctly recalled a word pair, those two brain areas rippled at the same time.

Further analysis showed that the ripples appeared in those two areas a few milliseconds before a volunteer remembered a word and gave a correct answer. Your brain is working on finding an answer before you are fully aware of it! Those ripples also appear to trigger brain waves that look similar to those observed in the association cortex when a person first learned a word pair.

The finding suggests that ripples in this part of the brain precede and may help to prompt the larger brain waves associated with replaying and calling to mind a particular memory. For example, hearing the words, “The Fab Four” may ripple into a full memory of a favorite Beatles album (yes! Sgt. Pepper’s Lonely Hearts Club Band) or, if you were lucky enough, a memorable concert back in the day (I never had that chance).

Zaghloul’s lab continues to study the details of these ripples to learn even more about how they may influence other neural signals and features involved in memory. So, the next time you throw a stone into a quiet pond and watch the ripples, perhaps it will trigger an electrical ripple in your brain to remember this blog and ruminate about this fascinating new discovery in neuroscience.

References:

[1] Coupled ripple oscillations between the medial temporal lobe and neocortex retrieve human memory. Vaz AP, Inati SK, Brunel N, Zaghloul KA. Science. 2019 Mar 1;363(6430):975-978.

[2] Cued Memory Retrieval Exhibits Reinstatement of High Gamma Power on a Faster Timescale in the Left Temporal Lobe and Prefrontal Cortex. Yaffe RB, Shaikhouni A, Arai J, Inati SK, Zaghloul KA. J Neurosci. 2017 Apr 26;37(17):4472-4480.

[3] Human Cortical Neurons in the Anterior Temporal Lobe Reinstate Spiking Activity during Verbal Memory Retrieval. Jang AI, Wittig JH Jr, Inati SK, Zaghloul KA. Curr Biol. 2017 Jun 5;27(11):1700-1705.e5.

Links:

Epilepsy Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Brain Basics (NINDS)

Zaghloul Lab (NINDS)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of General Medical Sciences


Can a Mind-Reading Computer Speak for Those Who Cannot?


Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, or other similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, the reconstructed sounds became easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
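As a rough illustration of the kind of learning involved, here is a tiny feed-forward network, written from scratch in Python, that learns a mapping from made-up “neural features” to made-up “acoustic parameters.” This is only a sketch of the general technique under invented assumptions; the study’s actual networks, recorded features, and training pipeline are far larger and more complex:

```python
# Toy sketch (not the study's model or data): a small network with one
# hidden layer learns, by gradient descent, to map 3 synthetic "neural
# features" to 2 synthetic "acoustic parameters."
import math
import random

random.seed(0)

def make_example():
    """Synthetic stand-in for (brain activity, target sound) pairs."""
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = [math.tanh(x[0] + 0.5 * x[1]), math.tanh(x[1] - x[2])]
    return x, y

H, LR = 8, 0.1                      # hidden units, learning rate
w1 = [[random.gauss(0, 0.5) for _ in range(3)] for _ in range(H)]
w2 = [[random.gauss(0, 0.5) for _ in range(H)] for _ in range(2)]

def forward(x):
    """Hidden tanh layer followed by a linear output layer."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    out = [sum(w * hi for w, hi in zip(row, h)) for row in w2]
    return h, out

def train_step(x, y):
    """One gradient-descent update; returns squared error for this example."""
    h, out = forward(x)
    err = [o - t for o, t in zip(out, y)]
    # Hidden-layer gradients must use the pre-update output weights.
    grad_h = [sum(err[k] * w2[k][j] for k in range(2)) * (1 - h[j] ** 2)
              for j in range(H)]
    for k in range(2):
        for j in range(H):
            w2[k][j] -= LR * err[k] * h[j]
    for j in range(H):
        for i in range(3):
            w1[j][i] -= LR * grad_h[j] * x[i]
    return sum(e * e for e in err)

data = [make_example() for _ in range(200)]
for epoch in range(200):
    loss = sum(train_step(x, y) for x, y in data) / len(data)
print("final mean squared error:", round(loss, 4))
```

The “strengthening and weakening of connections” described above corresponds to the small weight updates in `train_step`: connections that reduce the output error grow, and those that increase it shrink.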

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health

