Cool Videos

Taking Brain Imaging Even Deeper


Thanks to yet another amazing advance made possible by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, I can now take you on a 3D fly-through of all six layers of the part of the mammalian brain that processes external signals into vision. This unprecedented view comes courtesy of three-photon microscopy, a low-energy imaging approach that allows researchers to peer deep within the brains of living creatures without damaging or killing their brain cells.

The basic idea of multi-photon microscopy is this: for fluorescence microscopy to work, you deliver photons of a specific energy (usually with a laser) to excite a fluorescent molecule, which then emits light at a slightly lower energy (longer wavelength) that appears as a burst of colored light in the microscope. That’s how fluorescence works. Green fluorescent protein (GFP) is one of many proteins that can be engineered into cells or mice to make this possible.

But for that version of the approach to work on tissue, the excitation photons need to penetrate deeply, and that’s not possible for such high-energy photons. So two-photon strategies were developed, in which two photons must hit the target simultaneously, and it is the sum of their energies that activates the fluorophore.

That approach has made a big difference, but for deep tissue penetration the photons are still too high in energy. Enter the three-photon version! The even lower energy of the photons makes tissue more optically transparent, though to activate the fluorescent protein, three photons have to hit it simultaneously. But that’s part of the beauty of the system: the visual “noise” also goes down.
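For the curious, the arithmetic behind that trade-off is simple: a photon’s energy is inversely proportional to its wavelength, so splitting the job across n photons means each photon can have n times the wavelength. Here’s a minimal sketch in Python, assuming (purely for illustration) a typical one-photon excitation wavelength of 488 nm for GFP; real multi-photon setups tune the laser to each fluorophore’s measured cross sections.

```python
# Photon energy scales inversely with wavelength (E = h * c / wavelength).
# To deliver the same total excitation energy, n photons can each carry
# 1/n of the energy, i.e., each can have n times the wavelength.

PLANCK = 6.626e-34      # Planck's constant, J*s
LIGHT_SPEED = 2.998e8   # speed of light, m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon, in electron volts."""
    joules = PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)
    return joules / 1.602e-19

ONE_PHOTON_NM = 488.0   # illustrative one-photon excitation for GFP
for n in (1, 2, 3):
    wavelength = ONE_PHOTON_NM * n
    print(f"{n}-photon excitation: {wavelength:.0f} nm per photon, "
          f"{photon_energy_ev(wavelength):.2f} eV each")

# Longer-wavelength (lower-energy) photons scatter less in tissue, which
# is why the three-photon beam can reach deeper before blurring out.
```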

This particular video shows what takes place in the visual cortex of mice when objects pass before their eyes. As the objects appear, specific neurons (green) are activated to process the incoming information. Nearby, and slightly obscuring the view, are the blood vessels (pink, violet) that nourish the brain. At 33 seconds into the video, you can see the neurons’ myelin sheaths (pink) branching into the white matter of the brain’s subplate, which plays a key role in organizing the visual cortex during development.

This video comes from a recent paper in Nature Communications by a team from the Massachusetts Institute of Technology, Cambridge [1]. To obtain this pioneering view of the brain, Mriganka Sur, Murat Yildirim, and their colleagues built an innovative three-photon microscope. After carefully optimizing the system, they were able to peer more than 1,000 microns (0.04 inches) deep into the visual cortex of a live, alert mouse, far surpassing the imaging depth of standard one-photon microscopy (100 microns) and two-photon microscopy (400-500 microns).

This improved imaging depth allowed the team to plumb all six layers of the visual cortex (two-photon microscopy tops out at about three layers), as well as to record the brain’s visual processing activities in real time. Helping the researchers achieve this feat was a genetically engineered mouse model in which the cells of the visual cortex are color labeled to distinguish blood vessels from neurons, and to show when neurons are active.

During their in-depth imaging experiments, the MIT researchers found that each of the visual cortex’s six layers exhibited a different response to incoming visual information. One of the team’s most fascinating discoveries is that neurons residing in the subplate are actually quite active in adult animals. It had been assumed that these subplate neurons were active only during development. Their role in mature animals is now an open question for further study.

Sur often likens the work in his neuroscience lab to that of astronomers in their perpetual quest to see further into the cosmos; his goal, though, is to see ever deeper into the brain. His group and many other researchers supported by the BRAIN Initiative are indeed proving themselves to be biological explorers of the first order.

Reference:

[1] Functional imaging of visual cortical layers and subplate in awake mice with optimized three-photon microscopy. Yildirim M, Sugihara H, So PTC, Sur M. Nat Commun. 2019 Jan 11;10(1):177.

Links:

Sur Lab (Massachusetts Institute of Technology, Cambridge)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Eye Institute; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering


Can a Mind-Reading Computer Speak for Those Who Cannot?


Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, and other similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, it made the reconstructed sounds easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
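For readers who want a concrete picture of what such a decoder can look like, here’s a minimal sketch in Python (using PyTorch) of the general idea: a small network trained to map recorded neural activity to the parameters that drive a vocoder. This is not the team’s actual architecture; the electrode count, window size, number of vocoder parameters, and the stand-in training data are all invented for illustration.

```python
# Minimal sketch (not the paper's architecture): a small feedforward
# network that maps windows of neural-activity features to vocoder
# parameters (spectrogram-like values). All dimensions are invented.
import torch
import torch.nn as nn

N_ELECTRODES, WINDOW, N_VOCODER_PARAMS = 128, 10, 32

model = nn.Sequential(
    nn.Flatten(),                          # (batch, WINDOW, N_ELECTRODES) -> flat vector
    nn.Linear(WINDOW * N_ELECTRODES, 256),
    nn.ReLU(),
    nn.Linear(256, N_VOCODER_PARAMS),      # predicted vocoder parameters
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training data: neural recordings paired with the vocoder
# parameters of the speech the participant actually heard.
neural = torch.randn(64, WINDOW, N_ELECTRODES)
target_params = torch.randn(64, N_VOCODER_PARAMS)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(neural), target_params)
    loss.backward()
    optimizer.step()

# At inference time, the predicted parameters would drive the vocoder
# to synthesize audible speech from brain activity alone.
```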

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


Mammalian Brain Like You’ve Never Seen It Before


Credit: Gao et al., Science

Researchers are making amazing progress in developing new imaging approaches. And they are now using one of their latest creations, called ExLLSM (expansion lattice light-sheet microscopy), to provide us with jaw-dropping views of a wide range of biological systems, including the incredibly complex neural networks within the mammalian brain.

In this video, ExLLSM takes us on a super-resolution, 3D voyage through a tiny sample (0.003 inches thick) from the part of the mouse brain that processes sensation, the primary somatosensory cortex. The video zooms in and out of densely packed pyramidal neurons (large yellow cell bodies), each of which has about 7,000 synapses, or connections. You can also see presynapses (cyan), the parts of neurons that send chemical signals, and postsynapses (magenta), the parts that receive them.

At 1:45, the video zooms in on dendritic spines, which are mushroom-like nubs on the neuronal branches (yellow). These structures, located on the tips of dendrites, receive incoming signals that are turned into electrical impulses. While dendritic spines have been imaged in black and white with electron microscopy, they’ve never been presented before on such a vast, colorful scale.

The video comes from a paper, published recently in the journal Science [1], from the labs of Ed Boyden, Massachusetts Institute of Technology, Cambridge, and Nobel laureate Eric Betzig, Janelia Research Campus of the Howard Hughes Medical Institute, Ashburn, VA. Like many collaborations, this one comes with a little story.

Four years ago, the Boyden lab developed expansion microscopy (ExM). The technique involves infusing cells with a hydrogel, made from a chemical used in disposable diapers. The hydrogel expands molecules within the cell away from each other, usually by about 4.5 times, but still locks them into place for remarkable imaging clarity. It makes structures visible by light microscopy that are normally below the resolution limit.
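The payoff of that expansion is easy to estimate. Assuming a rough diffraction limit of about 300 nanometers for visible-light microscopy (an illustrative figure; the exact limit depends on the wavelength and the optics), a 4.5-fold expansion works out like this:

```python
# Back-of-envelope: physically expanding the specimen shrinks the
# effective resolution limit by the same factor.
DIFFRACTION_LIMIT_NM = 300.0   # rough limit for visible-light microscopy
EXPANSION_FACTOR = 4.5         # typical expansion microscopy (ExM) factor

effective_resolution = DIFFRACTION_LIMIT_NM / EXPANSION_FACTOR
print(f"Effective resolution: ~{effective_resolution:.0f} nm")

# ~67 nm: fine enough to separate structures that would blur
# together in an unexpanded sample under a light microscope.
```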

Though the expansion technique has worked well with a small number of cells under a standard light microscope, it hasn’t been as successful—until now—at imaging thicker tissue samples. That’s because thicker tissue is harder to illuminate, and flooding the specimen with light often bleaches out the fluorescent markers that scientists use to label proteins. The signal just fades away.

For Boyden, that was a problem that needed to be solved. Because his lab’s goal is to trace the inner workings of the brain in unprecedented detail, Boyden wants to image entire neural circuits in relatively thick swaths of tissue, not just look at individual cells in isolation.

After some discussion, Boyden’s team concluded that the best solution might be to swap out the standard microscope’s light source for a relatively new imaging tool developed in the Betzig lab. It’s called lattice light-sheet microscopy (LLSM), and it generates extremely thin sheets of light that illuminate tissue only in a very tightly defined plane, dramatically reducing light-related bleaching of the fluorescent markers in the tissue sample. This allows LLSM to keep acquiring images for far longer and quickly deliver stunningly vivid pictures.

Telephone calls were made, and the Betzig lab soon welcomed Ruixuan Gao, Shoh Asano, and colleagues from the Boyden lab to try their hand at combining the two techniques. As the video above shows, ExLLSM has proved to be a perfect technological match. In addition to the movie above, the team has used ExLLSM to provide unprecedented views of a range of samples—from human kidney to neuron bundles in the brain of the fruit fly.

Not only is ExLLSM super-resolution, it’s also super-fast. In fact, the team imaged the entire fruit fly brain in 2 1/2 days—an effort that would take years using an electron microscope.

ExLLSM will likely never supplant the power of electron microscopy or standard fluorescent light microscopy. Still, this new combo imaging approach shows much promise as a complementary tool for biological exploration. The more innovative imaging approaches that researchers have in their toolbox, the better for our ongoing efforts to unlock the mysteries of the brain and other complex biological systems. And yes, those systems are all complex. This is life we’re talking about!

Reference:

[1] Cortical column and whole-brain imaging with molecular contrast and nanoscale resolution. Gao R, Asano SM, Upadhyayula S, Pisarev I, Milkie DE, Liu TL, Singh V, Graves A, Huynh GH, Zhao Y, Bogovic J, Colonell J, Ott CM, Zugates C, Tappan S, Rodriguez A, Mosaliganti KR, Sheu SH, Pasolli HA, Pang S, Xu CS, Megason SG, Hess H, Lippincott-Schwartz J, Hantman A, Rubin GM, Kirchhausen T, Saalfeld S, Aso Y, Boyden ES, Betzig E. Science. 2019 Jan 18;363(6424).

Links:

Video: Expansion Microscopy Explained (YouTube)

Video: Lattice Light-Sheet Microscopy (YouTube)

How to Rapidly Image Entire Brains at Nanoscale Resolution, Howard Hughes Medical Institute, January 17, 2019.

Synthetic Neurobiology Group (Massachusetts Institute of Technology, Cambridge)

Eric Betzig (Janelia Research Campus, Ashburn, VA)

NIH Support: National Institute of Neurological Disorders and Stroke; National Human Genome Research Institute; National Institute on Drug Abuse; National Institute of Mental Health; National Institute of Biomedical Imaging and Bioengineering


Mapping the Brain’s Memory Bank


There’s a lot of groundbreaking research now underway to map the organization and internal wiring of the brain’s hippocampus, essential for memory, emotion, and spatial processing. This colorful video depicting a mouse hippocampus offers a perfect case in point.

The video presents the most detailed 3D atlas of the hippocampus ever produced, highlighting its five previously defined zones: dentate gyrus, CA1, CA2, CA3, and subiculum. The various colors within those zones represent areas with newly discovered and distinctive patterns of gene expression, revealing previously hidden layers of structural organization.

For instance, the subiculum, which relays messages from the hippocampus to other parts of the brain, includes several subregions, such as the three marked in red, yellow, and blue at about 23 seconds into the video.

How’d the researchers do it? In the new study, published in Nature Neuroscience [1], the researchers started with the Allen Mouse Brain Atlas, a rich, publicly accessible 3D atlas of gene expression in the mouse brain. The team, led by Hong-Wei Dong, University of Southern California, Los Angeles, drilled down into the data to pull up 258 genes that are differentially expressed in the hippocampus and might be helpful for mapping purposes.

Some of those 258 genes were expressed only within previously defined portions of the hippocampus. Others were “turned on” only in discrete parts of known hippocampal domains, leading the researchers to define 20 distinct subregions that hadn’t been recognized before.
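To give a flavor of how expression patterns can carve up a brain region, here’s a minimal sketch in Python. It is not the team’s actual pipeline: it simply treats every voxel as a vector of expression levels across the 258 genes and groups voxels with similar profiles, using random stand-in data.

```python
# Illustrative sketch (not the study's actual analysis): cluster voxels
# by their gene-expression profiles, so voxels with similar profiles
# fall into the same putative subregion.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N_VOXELS, N_GENES, N_SUBREGIONS = 5000, 258, 20   # 258 genes, as in the study

expression = rng.random((N_VOXELS, N_GENES))      # stand-in expression data

labels = KMeans(n_clusters=N_SUBREGIONS, n_init=10,
                random_state=0).fit_predict(expression)

# Each voxel now carries a subregion label; mapping labels back to voxel
# coordinates would paint a 3D parcellation like the one in the video.
print(np.bincount(labels))                        # voxels per subregion
```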

Combining these data, sophisticated analytical tools, and plenty of hard work, the team assembled this detailed atlas, together with connectivity data, to create a detailed wiring diagram. It includes about 200 signaling pathways that show how all those subregions network together and with other portions of the brain.

What’s really interesting is that the data also showed that these components of the hippocampus contribute to three relatively independent brain-wide communication networks. While much more study is needed, those three networks appear to relate to distinct functions of the hippocampus, including spatial navigation, social behaviors, and metabolism.

This more-detailed view of the hippocampus is just the latest from the NIH-funded Mouse Connectome Project. The ongoing project aims to create a complete connectivity atlas for the entire mouse brain.

The Mouse Connectome Project isn’t just for those with an interest in mice. Indeed, because the mouse and human brain are similarly organized, studies in the smaller mouse brain can help to provide a template for making sense of the larger and more complex human brain, with its tens of billions of interconnected neurons.

Ultimately, the hope is that this understanding of healthy brain connections will provide clues for better treating the abnormal connections, and disconnections, involved in numerous neurological conditions, including Alzheimer’s disease, Parkinson’s disease, and autism spectrum disorder.

Reference:

[1] Integration of gene expression and brain-wide connectivity reveals the multiscale organization of mouse hippocampal networks. Bienkowski MS, Bowman I, Song MY, Gou L, Ard T, Cotter K, Zhu M, Benavidez NL, Yamashita S, Abu-Jaber J, Azam S, Lo D, Foster NN, Hintiryan H, Dong HW. Nat Neurosci. 2018 Nov;21(11):1628-1643.

Links:

Mouse Connectome Project (University of Southern California, Los Angeles)

Human Connectome Project (USC)

Allen Brain Map (Allen Institute, Seattle)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Institute of Mental Health; National Cancer Institute


Halloween Fly-Through of a Mouse Skull


Credit: Chai Lab, University of Southern California, Los Angeles

Halloween is full of all kinds of “skulls”—from spooky costumes to ghoulish goodies. So, in keeping with the spirit of the season, I’d like to share this eerily informative video that takes you deep inside the real thing.

