Honoring a Champion of Biomedical Research

Dr. Francis Collins poses with John Edward Porter in front of a wall display honoring Mr. Porter
It was my great pleasure on May 9, 2019 to help dedicate a new exhibit honoring Congressman John Edward Porter (left) for his strong leadership on behalf of NIH research. The exhibit is located at the Porter Neuroscience Research Center on NIH’s Bethesda, MD campus. Credit: NIH

Finding Beauty in the Nervous System of a Fruit Fly Larva

Wow! Click on the video. If you’ve ever wondered where those pesky flies in your fruit bowl come from, you’re looking at it right now. It’s a fruit fly larva. And this 3D movie offers never-before-seen insight into proprioception—the body’s sixth sense of knowing where its parts are positioned and how they are moving in space.

This live-action video highlights the movement of the young fly’s proprioceptive nerve cells. They send signals to the fly brain that are essential for tracking the body’s position in space and coordinating movement. The colors indicate the depth of the nerve cells inside the body, showing those at the surface (orange) and those further within (blue).

Such movies make it possible, for the first time, to record precisely how every one of these sensory cells is arranged within the body. They also provide a unique window into how body positions are dynamically encoded in these cells, as a segmented larva inches along in search of food.

The video was created using a form of confocal microscopy called Swept Confocally Aligned Planar Excitation, or SCAPE. It captures 3D images by sweeping a sheet of laser light back and forth across a living sample. Even better, it does this while the microscope remains completely stationary—no need for a researcher to move any lenses up or down, or hold a live sample still.

Most impressively, with this new high-speed technology, developed with support from the NIH’s BRAIN Initiative, researchers are now able to capture videos like the one seen above in record time, with each whole volume recorded in under 1/10th of a second! That’s hundreds of times faster than with a conventional microscope, which scans objects point by point.
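For the arithmetically inclined, here’s a rough back-of-the-envelope comparison of why exciting an entire plane at once beats visiting voxels one at a time. Every number below is an illustrative assumption chosen for the sake of arithmetic, not a specification of the actual SCAPE instrument:

```python
# Rough, illustrative comparison of volumetric imaging rates.
# All numbers are assumptions, not specifications of SCAPE.

voxels_per_plane = 512 * 128      # assumed x-y sampling of one image plane
planes_per_volume = 100           # assumed number of depth planes per volume

# Point-scanning confocal: every voxel is visited one at a time.
dwell_time_s = 4e-6               # assumed 4-microsecond dwell per voxel
point_scan_s = voxels_per_plane * planes_per_volume * dwell_time_s

# Light-sheet (SCAPE-style): a whole plane is excited at once, so the
# volume rate is set by the camera frame time, not a per-voxel dwell.
camera_frame_s = 1e-3             # assumed 1 ms per camera frame (one plane)
sheet_scan_s = planes_per_volume * camera_frame_s

print(f"Point scanning: {point_scan_s:.2f} s per volume")
print(f"Sheet scanning: {sheet_scan_s:.2f} s per volume")
print(f"Speedup:        ~{point_scan_s / sheet_scan_s:.0f}x")
```

With these made-up but plausible numbers, the sheet-based approach lands right around a tenth of a second per volume, a few hundred times faster than the point scan.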

As reported in Current Biology, the team, led by Elizabeth Hillman and Wesley Grueber, Columbia University, New York, didn’t stop at characterizing the structural details and physical movements of nerve cells involved in proprioception in a crawling larva. In another set of imaging experiments, they went a step further, capturing faint flashes of green in individual labeled nerve cells each time they fired. (You have to look very closely to see them.) With each wave of motion, proprioceptive nerve cells light up in sequence, demonstrating precisely when they are sending signals to the animal’s brain.

From such videos, the researchers have generated a huge amount of data on the position and activity of each proprioceptive nerve cell. The data show that the specific position of each cell makes it uniquely sensitive to changes in position of particular segments of a larva’s body. While most of the proprioceptive nerve cells fired when their respective body segment contracted, others were attuned to fire when a larval segment stretched.
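To make that kind of analysis concrete, here’s a minimal sketch of how one might classify a cell’s tuning from such data, assuming we have, for each cell, an activity trace and the length of its home body segment over time. The function name, variable names, and correlation threshold are all illustrative, not taken from the paper:

```python
import numpy as np

def classify_tuning(activity, segment_length, threshold=0.3):
    """Label a cell as contraction- or stretch-tuned (illustrative).

    activity       : 1D array, the cell's activity trace over time
    segment_length : 1D array, the length of its body segment over time
    threshold      : illustrative correlation cutoff

    A cell whose activity rises as its segment shortens (negative
    correlation with length) is called contraction-tuned; one whose
    activity rises as the segment lengthens is stretch-tuned.
    """
    r = np.corrcoef(activity, segment_length)[0, 1]
    if r <= -threshold:
        return "contraction-tuned"
    if r >= threshold:
        return "stretch-tuned"
    return "unclassified"

# Toy example: a segment that rhythmically shortens and lengthens,
# and a cell that is active while the segment is short.
t = np.linspace(0, 10, 500)
length = 1.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t)
cell = np.clip(-np.sin(2 * np.pi * 0.5 * t), 0, None)
print(classify_tuning(cell, length))  # -> contraction-tuned
```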

Taken together, the data show that proprioceptive nerve cells provide the brain with a detailed sequence of signals, reflecting each part of a young fly’s undulating body. It’s clear that every proprioceptive neuron has a unique role to play in the process. The researchers now plan to create similar movies capturing neurons in the fly’s central nervous system.

A holy grail of the BRAIN Initiative is to capture the brain in action. With these advances in imaging larval flies, researchers are getting ever closer to understanding the coordinated activities of an organism’s complete nervous system—though this one is a lot simpler than ours! And perhaps this movie—and the anticipation of the sequels to come—may even inspire a newfound appreciation for those pesky flies that sometimes hover nearby.

Reference:

[1] Characterization of Proprioceptive System Dynamics in Behaving Drosophila Larvae Using High-Speed Volumetric Microscopy. Vaadia RD, Li W, Voleti V, Singhania A, Hillman EMC, Grueber WB. Curr Biol. 2019 Mar 18;29(6):935-944.e4.

Links:

Using Research Organisms to Study Health and Disease (National Institute of General Medical Sciences/NIH)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Hillman Lab (Columbia University, New York)

Grueber Lab (Columbia University, New York)

NIH Support: National Institute of Neurological Disorders and Stroke; Eunice Kennedy Shriver National Institute of Child Health and Human Development


The Brain Ripples Before We Remember

Ripple brain
Credit: Thinkstock

Throw a stone into a quiet pond, and you’ll see ripples expand across the water from the point where it went in. Now, neuroscientists have discovered that a different sort of ripple—an electrical ripple—spreads across the human brain when it strives to recall memories.

In memory games involving 14 very special volunteers, an NIH-funded team found that the split second before a person nailed the right answer, tiny ripples of electrical activity appeared in two specific areas of the brain [1]. If the volunteer recalled an answer incorrectly or didn’t answer at all, the ripples were much less likely to appear. While many questions remain, the findings suggest that the short, high-frequency electrical waves seen in these brain ripples may play an unexpectedly important role in our ability to remember.

The new study, published in Science, builds on brain recording data compiled over the last several years by neurosurgeon and researcher Kareem Zaghloul at NIH’s National Institute of Neurological Disorders and Stroke (NINDS). Zaghloul’s surgical team often temporarily places 10 to 20 arrays of tiny electrodes into the brains of people with drug-resistant epilepsy. As I’ve highlighted recently, the brain mapping procedure aims to pinpoint the source of a patient’s epileptic seizures. But, with a patient’s permission, the procedure also presents an opportunity to learn more about how the brain works, with exceptional access to its circuits.

One such opportunity is to explore how the brain stores and recalls memories. To do this, the researchers show their patient volunteers hundreds of pairs of otherwise unrelated words, such as “pencil and bishop” or “orange and navy.” Later, they show them one of the words and test their memory to recall the right match. All the while, electrodes record the brain’s electrical activity.
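The logic of the task itself is simple enough to sketch in a few lines. The word pairs below borrow the examples above; everything else is invented for illustration:

```python
import random

# Minimal sketch of a paired-associates memory test; the word list
# and scoring are invented for illustration.
pairs = {"pencil": "bishop", "orange": "navy"}

# Study phase: show each pair of otherwise unrelated words.
for cue, target in pairs.items():
    print(f"STUDY: {cue} -- {target}")

# Test phase: show one word and score recall of its partner.
cue = random.choice(list(pairs))
response = input(f"TEST: what word went with '{cue}'? ")
print("correct" if response.strip().lower() == pairs[cue] else "incorrect")
```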

Previously published studies by Zaghloul’s lab [2, 3] and many others have shown that memory involves the activation of a number of brain regions. That includes the medial temporal lobe, which is involved in forming and retrieving memories, and the prefrontal cortex, which helps in organizing memories in addition to its roles in “executive functions,” such as planning and setting goals. Those studies also have highlighted a role for the temporal association cortex, another portion of the temporal lobe involved in processing experiences and words.

In data collected from patients with epilepsy, Zaghloul’s team’s earlier studies had uncovered some telltale patterns. For instance, when a person correctly recalled a word pair, the brain showed patterns of activity that looked quite similar to those present when he or she first learned to make the word association.

Alex Vaz, one of Zaghloul’s doctoral students, thought there might be more to the story. There was emerging evidence in rodents that brain ripples—short bursts of high frequency electrical activity—are involved in learning. There was also some evidence in people that such ripples might be important for solidifying memories during sleep. Vaz wondered whether they might find evidence of ripples as well in data gathered from people who were awake.

Vaz’s hunch was correct. The reanalysis revealed ripples of electricity in the medial temporal lobe and the temporal association cortex. When a person correctly recalled a word pair, those two brain areas rippled at the same time.
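For a sense of how such ripples are typically pulled out of electrical recordings, here’s a minimal sketch using one standard approach: band-pass filter each channel in a ripple band, flag moments when the signal envelope crosses a threshold, and then look for overlap between regions. The frequency band, threshold, and toy signals below are illustrative choices, not the team’s exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(signal, fs, band=(80, 120), n_std=3.0):
    """Return a boolean mask of ripple moments in one channel.

    band  : assumed ripple frequency band in Hz (illustrative)
    n_std : envelope threshold in standard deviations (illustrative)
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, signal)))
    return envelope > envelope.mean() + n_std * envelope.std()

# Toy signals standing in for the two recorded brain regions: noise
# plus a shared 100 Hz burst around t = 1.0 s.
fs = 1000                                  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
burst = np.where((t > 1.0) & (t < 1.05), np.sin(2 * np.pi * 100 * t), 0.0)
mtl = 0.5 * np.random.randn(t.size) + burst      # medial temporal lobe
assoc = 0.5 * np.random.randn(t.size) + burst    # association cortex

# Co-rippling: both regions above threshold at the same moment.
co = detect_ripples(mtl, fs) & detect_ripples(assoc, fs)
print(f"co-rippling samples: {co.sum()} (clustered near t = 1.0 s)")
```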

Further analysis showed that the ripples appeared in those two areas a few milliseconds before a volunteer remembered a word and gave a correct answer. Your brain is working on finding an answer before you are fully aware of it! Those ripples also appear to trigger brain waves that look similar to those observed in the association cortex when a person first learned a word pair.

The finding suggests that ripples in this part of the brain precede and may help to prompt the larger brain waves associated with replaying and calling to mind a particular memory. For example, hearing the words, “The Fab Four” may ripple into a full memory of a favorite Beatles album (yes! Sgt. Pepper’s Lonely Hearts Club Band) or, if you were lucky enough, a memorable concert back in the day (I never had that chance).

Zaghloul’s lab continues to study the details of these ripples to learn even more about how they may influence other neural signals and features involved in memory. So, the next time you throw a stone into a quiet pond and watch the ripples, perhaps it will trigger an electrical ripple in your brain to remember this blog and ruminate about this fascinating new discovery in neuroscience.

References:

[1] Coupled ripple oscillations between the medial temporal lobe and neocortex retrieve human memory. Vaz AP, Inati SK, Brunel N, Zaghloul KA. Science. 2019 Mar 1;363(6430):975-978.

[2] Cued Memory Retrieval Exhibits Reinstatement of High Gamma Power on a Faster Timescale in the Left Temporal Lobe and Prefrontal Cortex. Yaffe RB, Shaikhouni A, Arai J, Inati SK, Zaghloul KA. J Neurosci. 2017 Apr 26;37(17):4472-4480.

[3] Human Cortical Neurons in the Anterior Temporal Lobe Reinstate Spiking Activity during Verbal Memory Retrieval. Jang AI, Wittig JH Jr, Inati SK, Zaghloul KA. Curr Biol. 2017 Jun 5;27(11):1700-1705.e5.

Links:

Epilepsy Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Brain Basics (NINDS)

Zaghloul Lab (NINDS)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute of General Medical Sciences


Taking Brain Imaging Even Deeper

Thanks to yet another amazing advance made possible by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, I can now take you on a 3D fly-through of all six layers of the visual cortex, the part of the mammalian brain that turns external signals into vision. This unprecedented view is made possible by three-photon microscopy, a low-energy imaging approach that is allowing researchers to peer deeply within the brains of living creatures without damaging or killing their brain cells.

The basic idea of multi-photon microscopy is this: fluorescence microscopy works by delivering photons of a specific energy (usually from a laser) to excite a fluorescent molecule, which then emits light at a slightly lower energy (longer wavelength) that shows up as a burst of colored light in the microscope. Green fluorescent protein (GFP) is one of many proteins that can be engineered into cells or mice to make that possible.

But to use that approach deep within tissue, the excitation photons need to penetrate far below the surface, and such high-energy photons scatter too strongly to get there. So two-photon strategies were developed, in which two lower-energy photons must hit the fluorophore simultaneously, with the sum of their energies activating it.
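Since a photon’s energy is E = hc/λ, delivering a given excitation energy with n simultaneous photons means each photon carries roughly n times the one-photon wavelength. A quick worked example (the 488 nm one-photon figure for GFP is a commonly quoted approximation; real multi-photon absorption peaks shift somewhat from this simple n-times rule):

```python
# Photon energy E = h*c / wavelength, so spreading the excitation
# energy over n simultaneous photons means each photon carries about
# n times the one-photon wavelength (real absorption peaks shift a bit).
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
one_photon_nm = 488    # commonly quoted one-photon excitation for GFP

for n in (1, 2, 3):
    wavelength_nm = n * one_photon_nm
    energy_eV = h * c / (wavelength_nm * 1e-9) / 1.602e-19
    print(f"{n}-photon: ~{wavelength_nm} nm per photon, {energy_eV:.2f} eV each")
```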

That approach has made a big difference, but for deep tissue penetration the photons are still too high in energy. Enter the three-photon version! The even lower energy of the photons makes tissue more optically transparent, though to activate the fluorescent protein, three photons now have to hit it simultaneously. But that’s part of the beauty of the system: because such triple hits essentially never happen outside the focal point, the visual “noise” also goes down.
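Why does the noise go down? Because the odds of n photons arriving at the same instant scale as the n-th power of the local light intensity, excitation falls off far more steeply away from the focal point as n grows. A quick illustration of that scaling, using a crude idealized intensity falloff rather than a real optical model:

```python
# n-photon excitation scales as intensity**n, so out-of-focus light
# contributes far less as n grows. The 1/(1 + z^2) falloff below is a
# crude idealization of intensity around the focus, not real optics.
for z in (0.0, 1.0, 2.0):          # distance from focus, arbitrary units
    intensity = 1.0 / (1.0 + z**2)
    one_p, two_p, three_p = (intensity**n for n in (1, 2, 3))
    print(f"z={z:.0f}: 1-photon={one_p:.3f}  "
          f"2-photon={two_p:.3f}  3-photon={three_p:.3f}")
```

At twice the unit distance from the focus, the three-photon signal has already dropped below one percent of its in-focus value with these toy numbers.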

This particular video shows what takes place in the visual cortex of mice when objects pass before their eyes. As the objects appear, specific neurons (green) are activated to process the incoming information. Nearby, and slightly obscuring the view, are the blood vessels (pink, violet) that nourish the brain. At 33 seconds into the video, you can see the neurons’ myelin sheaths (pink) branching into the white matter of the brain’s subplate, which plays a key role in organizing the visual cortex during development.

This video comes from a recent paper in Nature Communications by a team from the Massachusetts Institute of Technology, Cambridge [1]. To obtain this pioneering view of the brain, Mriganka Sur, Murat Yildirim, and their colleagues built an innovative microscope that emits three low-energy photons. After carefully optimizing the system, they were able to peer more than 1,000 microns (about 0.04 inches) deep into the visual cortex of a live, alert mouse, far surpassing the imaging capacity of standard one-photon microscopy (100 microns) and two-photon microscopy (400-500 microns).

This improved imaging depth allowed the team to plumb all six layers of the visual cortex (two-photon microscopy tops out at about three layers), as well as to record in real time the brain’s visual processing activities. Helping the researchers to achieve this feat was the availability of a genetically engineered mouse model in which the cells of the visual cortex are color-labeled to distinguish blood vessels from neurons, and to show when neurons are active.

During their in-depth imaging experiments, the MIT researchers found that each of the visual cortex’s six layers exhibited different responses to incoming visual information. One of the team’s most fascinating discoveries is that neurons residing in the subplate are actually quite active in adult animals. It had been assumed that these subplate neurons were active only during development. Their role in mature animals is now an open question for further study.

Sur often likens the work in his neuroscience lab to that of astronomers and their perpetual quest to see farther into the cosmos—but his goal is to see ever deeper into the brain. He and his group, along with many other researchers supported by the BRAIN Initiative, are indeed proving themselves to be biological explorers of the first order.

Reference:

[1] Functional imaging of visual cortical layers and subplate in awake mice with optimized three-photon microscopy. Yildirim M, Sugihara H, So PTC, Sur M. Nat Commun. 2019 Jan 11;10(1):177.

Links:

Sur Lab (Massachusetts Institute of Technology, Cambridge)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Eye Institute; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering


Can a Mind-Reading Computer Speak for Those Who Cannot?

Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, or other similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. After the vocoder-produced sounds were processed and “cleaned up” by those neural networks, the reconstructed words became easier for a listener to recognize, though this first attempt still sounds pretty robotic.
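To make the idea more concrete, here’s a minimal sketch of the kind of deep network that could learn to map recorded neural features onto the parameters a vocoder consumes. It is a generic stand-in written with PyTorch, not the architecture from the paper; the layer sizes, feature dimensions, and stand-in training data are all invented:

```python
import torch
import torch.nn as nn

# Invented dimensions: 128 neural features per time step in,
# 32 vocoder parameters (e.g., spectral bands) out.
N_NEURAL, N_VOCODER = 128, 32

model = nn.Sequential(            # a generic deep regression network
    nn.Linear(N_NEURAL, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_VOCODER),
)

# Stand-in training data: random tensors in place of real recordings of
# auditory-cortex activity and the matching vocoder parameters.
neural = torch.randn(1000, N_NEURAL)
target = torch.randn(1000, N_VOCODER)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(200):           # minimal training loop
    optimizer.zero_grad()
    loss = loss_fn(model(neural), target)
    loss.backward()
    optimizer.step()

# In a real system, the trained model's outputs would drive a vocoder
# to synthesize audible speech from brain activity alone.
```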

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health

