
BRAIN Initiative

A Real-Time Look at Value-Based Decision Making


All of us make many decisions every day. For most things, such as which jacket to wear or where to grab a cup of coffee, there’s usually no right answer, so we often decide using values rooted in our past experiences. Now, neuroscientists have identified the part of the mammalian brain that stores information essential to such value-based decision making.

Researchers zeroed in on this particular brain region, known as the retrosplenial cortex (RSC), by analyzing movies—including the clip shown about 32 seconds into this video—that captured in real time what goes on in the brains of mice as they make decisions. Each white circle is a neuron, and the flickers of light reflect their activity: the brighter the light, the more active the neuron at that point in time.
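For readers curious how movies like this become quantitative measurements, here is a minimal sketch of a ΔF/F calculation, the standard way brightness changes in calcium imaging are converted into activity traces. The function, the baseline convention, and the toy numbers are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """Convert a raw fluorescence trace (one neuron over time)
    into dF/F, a common proxy for neural activity."""
    trace = np.asarray(trace, dtype=float)
    # Estimate a baseline F0 from the dimmer frames (assumed convention).
    f0 = np.percentile(trace, baseline_percentile)
    # Frames brighter than baseline yield positive dF/F values.
    return (trace - f0) / f0

# Toy example: a neuron that "flickers" brighter mid-recording.
raw = [100, 102, 99, 150, 210, 160, 101, 98]
print(np.round(delta_f_over_f(raw), 2))
```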

All told, the NIH-funded team, led by Ryoma Hattori and Takaki Komiyama, University of California at San Diego, La Jolla, made recordings of more than 45,000 neurons across six regions of the mouse brain [1]. Neural activity isn’t usually visible. But, in this case, researchers used mice that had been genetically engineered so that their neurons, when activated, expressed a protein that glowed.

Their system was also set up to encourage the mice to make value-based decisions, including choosing between two drinking tubes, each with a different probability of delivering water. During this decision-making process, the RSC proved to be the region of the brain where neurons persistently lit up, reflecting how the mouse evaluated one option over the other.
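To make "value-based" concrete, here is a minimal sketch of how choice between two probabilistically rewarded options is often modeled: a running value estimate for each tube, updated after every trial, and a choice rule that favors the higher-valued option. This is a generic, textbook-style illustration under assumed parameters, not the analysis used in the paper.

```python
import math
import random

def softmax_choice(values, beta=3.0):
    """Pick tube 0 or 1 with probability weighted by learned value."""
    weights = [math.exp(beta * v) for v in values]
    r = random.random() * sum(weights)
    return 0 if r < weights[0] else 1

def run_session(p_water=(0.7, 0.2), alpha=0.1, n_trials=200):
    values = [0.5, 0.5]                      # initial value estimates for the two tubes
    for _ in range(n_trials):
        choice = softmax_choice(values)
        reward = 1.0 if random.random() < p_water[choice] else 0.0
        # Nudge the chosen tube's value toward the outcome (delta rule).
        values[choice] += alpha * (reward - values[choice])
    return values

print(run_session())   # estimates should roughly track the two reward probabilities
```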

The new discovery, described in the journal Cell, comes as something of a surprise to neuroscientists because the RSC hadn’t previously been implicated in value-based decisions. To gather additional evidence, the researchers turned to optogenetics, a technique that enabled them to use light to inactivate neurons in the RSC of living animals. These studies confirmed that, with the RSC turned off, the mice couldn’t retrieve value information based on past experience.

The researchers note that the RSC is heavily interconnected with other key brain regions, including those involved in learning, memory, and controlling movement. This indicates that the RSC may be well situated to serve as a hub for storing value information, allowing it to be accessed and acted upon when it is needed.

The findings are yet another amazing example of how advances coming out of the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative are revolutionizing our understanding of the brain. In the future, the team hopes to learn more about how the RSC stores this information and sends it to other parts of the brain. They note that it will also be important to explore how activity in this brain area may be altered in schizophrenia, dementia, substance abuse, and other conditions that may affect decision-making abilities. It will also be interesting to see how this develops during childhood and adolescence.

Reference:

[1] Area-Specificity and Plasticity of History-Dependent Value Coding During Learning. Hattori R, Danskin B, Babic Z, Mlynaryk N, Komiyama T. Cell. 2019 Jun 13;177(7):1858-1872.e15.

Links:

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Komiyama Lab (UCSD, La Jolla)

NIH Support: National Institute of Neurological Disorders and Stroke; National Eye Institute; National Institute on Deafness and Other Communication Disorders


A Neuronal Light Show


Credit: Chen X, Cell, 2019

These colorful lights might look like a video vignette from one of the spectacular evening light shows taking place this holiday season. But they actually aren’t. These lights are illuminating the way to a much fuller understanding of the mammalian brain.

The video features a new research method called BARseq (Barcoded Anatomy Resolved by Sequencing). Created by a team of NIH-funded researchers led by Anthony Zador, Cold Spring Harbor Laboratory, NY, BARseq enables scientists to map in a matter of weeks the location of thousands of neurons in the mouse brain with greater precision than has ever been possible before.

How does it work? With BARseq, researchers generate uniquely identifying RNA barcodes and then tag one to each individual neuron within brain tissue. As reported recently in the journal Cell, those barcodes allow them to keep track of the location of an individual cell amid millions of neurons [1]. This also enables researchers to map the tangled paths of individual neurons from one region of the mouse brain to the next.

The video shows how the researchers read the barcodes. Each twinkling light is a barcoded neuron within a thin slice of mouse brain tissue. The changing colors from frame to frame correspond to one of the four letters, or chemical bases, in RNA (A=purple, G=blue, U=yellow, and C=white). A neuron that flashes blue, purple, yellow, white is tagged with a barcode that reads GAUC, while yellow, white, white, white is UCCC.
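In other words, reading a barcode just means translating each frame’s color into a base. Here is a tiny sketch using the color key above; the function and the color names are illustrative, not part of the actual BARseq software.

```python
# Color-to-base key from the video: A=purple, G=blue, U=yellow, C=white.
COLOR_TO_BASE = {"purple": "A", "blue": "G", "yellow": "U", "white": "C"}

def read_barcode(frame_colors):
    """Translate the per-frame colors of one neuron into its RNA barcode."""
    return "".join(COLOR_TO_BASE[color] for color in frame_colors)

print(read_barcode(["blue", "purple", "yellow", "white"]))   # GAUC
print(read_barcode(["yellow", "white", "white", "white"]))   # UCCC
```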

By sequencing and reading the barcodes to distinguish among seemingly identical cells, the researchers mapped the connections of more than 3,500 neurons in a mouse’s auditory cortex, a part of the brain involved in hearing. In fact, they report they’re now able to map tens of thousands of individual neurons in a mouse in a matter of weeks.

What makes BARseq even better than the team’s previous mapping approach, called MAPseq, is its ability to read the barcodes at their original location in the brain tissue [2]. As a result, they can produce maps with much finer resolution. It’s also possible to maintain other important information about each mapped neuron’s identity and function, including the expression of its genes.

Zador reports that they’re continuing to use BARseq to produce maps of other essential areas of the mouse brain with more detail than had previously been possible. Ultimately, these maps will provide a firm foundation for better understanding of human thought, consciousness, and decision-making, along with how such mental processes get altered in conditions such as autism spectrum disorder, schizophrenia, and depression.

Here’s wishing everyone a safe and happy holiday season. It’s been a fantastic year in science, and I look forward to bringing you more cool NIH-supported research in 2020!

References:

[1] High-Throughput Mapping of Long-Range Neuronal Projection Using In Situ Sequencing. Chen X, Sun YC, Zhan H, Kebschull JM, Fischer S, Matho K, Huang ZJ, Gillis J, Zador AM. Cell. 2019 Oct 17;179(3):772-786.e19.

[2] High-Throughput Mapping of Single-Neuron Projections by Sequencing of Barcoded RNA. Kebschull JM, Garcia da Silva P, Reid AP, Peikon ID, Albeanu DF, Zador AM. Neuron. 2016 Sep 7;91(5):975-987.

Links:

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Zador Lab (Cold Spring Harbor Laboratory, Cold Spring Harbor, NY)

NIH Support: National Institute of Neurological Disorders and Stroke; National Institute on Drug Abuse; National Cancer Institute


3D Neuroscience at the Speed of Life


This fluorescent worm makes for much more than a mesmerizing video. It showcases a significant technological leap forward in our ability to capture in real time the firing of individual neurons in a living, freely moving animal.

As this Caenorhabditis elegans worm undulates, 113 neurons throughout its brain and body (green/yellow spots) get brighter and darker as each neuron activates and deactivates. In fact, about halfway through the video, you can see streaks tracking the positions of individual neurons (blue/purple-colored lines) from one frame to the next. Until now, it would have been technologically impossible to capture this “speed of life” with such clarity.

With funding from the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, Elizabeth Hillman at Columbia University’s Zuckerman Institute, New York, has pioneered the pairing of a 3D live-imaging microscope with an ultra-fast camera. This pairing, showcased above, is a technique called Swept Confocally Aligned Planar Excitation (SCAPE) microscopy.

Since first demonstrating SCAPE in February 2015 [1], Hillman and her team have worked hard to improve, refine, and expand the approach. Recently, they used SCAPE 1.0 to image how proprioceptive neurons in fruit-fly larvae sense body position while crawling. Now, as described in Nature Methods, they introduce SCAPE “2.0,” with boosted resolution and a much faster camera—enabling 3D imaging at speeds hundreds of times faster than conventional microscopes [2]. To track a very wiggly worm, the researchers image their target 25 times a second!

As with the first-generation SCAPE, version 2.0 uses a scanning mirror to sweep a slanted sheet of light across a sample. This same mirror redirects light coming from the illuminated plane to focus onto a stationary high-speed camera. The approach lets SCAPE capture 3D images at very high speeds, while also causing very little photobleaching compared to conventional point-scanning microscopes, reducing the sample damage that often occurs during time-lapse microscopy.
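To get a feel for the speed involved, the volumetric rate is simply the camera’s frame rate divided by the number of slanted planes swept per volume. Here is a back-of-the-envelope sketch; the camera frame rate and planes-per-volume figures are hypothetical, and only the roughly 25 volumes per second comes from the text above.

```python
def volumes_per_second(camera_fps, planes_per_volume):
    """Each 3D volume is assembled from one sweep of slanted 2D planes."""
    return camera_fps / planes_per_volume

# Hypothetical example: a 2,500 frame/s camera sweeping 100 planes per volume
# yields the ~25 volumes per second used to track the wiggling worm.
print(volumes_per_second(2500, 100))   # 25.0
```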

Like SCAPE 1.0, the upgraded 2.0 system uses only a single, stationary objective lens, so it doesn’t need to hold, move, or disturb a sample during imaging. This flexibility enables scientists to use SCAPE in a wide range of experiments in which they can present stimuli or probe an animal’s behavior—all while imaging how the underlying cells drive and reflect those behaviors.

The SCAPE 2.0 paper shows the system’s biological versatility by also recording the beating heart of a zebrafish embryo at record-breaking speeds. In addition, SCAPE 2.0 can rapidly image large fixed, cleared, and expanded tissues such as the retina, brain, and spinal cord—enabling tracing of the shape and connectivity of cellular circuits. Hillman and her team are dedicated to exporting their technology; they provide guidance and a parts list for SCAPE 2.0 so that researchers can build their own version using inexpensive off-the-shelf parts.

Watching worms wriggling around may remind us of middle-school science class. But to neuroscientists, these images represent progress toward understanding the nervous system in action, literally at the speed of life!

References:

[1] Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms. Bouchard MB, Voleti V, Mendes CS, Lacefield C, et al. Nat Photonics. 2015;9(2):113-119.

[2] Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Voleti V, Patel KB, Li W, Campos CP, et al. Nat Methods. 2019 Sep 27;16:1054–1062.

Links:

Using Research Organisms to Study Health and Disease (National Institute of General Medical Sciences/NIH)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Hillman Lab (Columbia University, New York)

NIH Support: National Institute of Neurological Disorders and Stroke; National Heart, Lung, and Blood Institute


Multiplex Rainbow Technology Offers New View of the Brain


Caption: Confocal LNA-PRISM imaging of neuronal synapses. Conventional images of cell nuclei and two proteins (top row, three images on the left), along with 11 PRISM images of proteins and one composite, multiplexed image (bottom row, right). Credit: Adapted from Guo SM, Nature Communications, 2019

The NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative is revolutionizing our understanding of how the brain works through its creation of new imaging tools. One of the latest advances—used to produce this rainbow of images—makes it possible to view dozens of proteins in rapid succession in a single tissue sample containing thousands of neural connections, or synapses.

Apart from their colors, most of these images look nearly identical at first glance. But, upon closer inspection, you’ll see some subtle differences among them in both intensity and pattern. That’s because the images capture different proteins within the complex network of synapses—and those proteins may be present in that network in different amounts and locations. Such findings may shed light on key differences among synapses, as well as provide new clues into the roles that synaptic proteins may play in schizophrenia and various other neurological disorders.

Synapses contain hundreds of proteins that regulate the release of chemicals called neurotransmitters, which allow neurons to communicate. Each synaptic protein has its own specific job in the process. But there have been longstanding technical difficulties in observing synaptic proteins at work. Conventional fluorescence microscopy can visualize at most four proteins in a synapse.

As described in Nature Communications [1], researchers led by Mark Bathe, Massachusetts Institute of Technology (MIT), Cambridge, and Jeffrey Cottrell, Broad Institute of MIT and Harvard, Cambridge, have just upped this number considerably while delivering high-quality images. They did it by adapting an existing imaging method called DNA PAINT [2]. The researchers call their adapted method PRISM, short for Probe-based Imaging for Sequential Multiplexing.

Here’s how it works: First, researchers label proteins or other molecules of interest using antibodies that recognize those proteins. Those antibodies include a unique DNA probe that helps with the next important step: making the proteins visible under a microscope.

To do it, they deliver short snippets of complementary fluorescent DNA, which bind the DNA-antibody probes. While each protein of interest is imaged separately, researchers can easily wash the probes from a sample to allow a series of images to be generated, each capturing a different protein of interest.
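Conceptually, the workflow is an image-wash-repeat loop over probes. Here is a minimal sketch of that cycle in code; the classes, function names, and example protein targets are placeholders invented for illustration, not the actual PRISM protocol or software.

```python
class Sample:
    """Stand-in for a tissue sample whose proteins carry DNA-antibody tags."""
    def __init__(self):
        self.bound_probe = None

    def add_probe(self, probe):
        self.bound_probe = probe      # fluorescent DNA binds its matching DNA-antibody tag

    def wash_probe(self):
        self.bound_probe = None       # clear the signal before the next round

def sequential_multiplex(sample, probes, acquire_image):
    """Image one protein target at a time in the same field of view."""
    images = {}
    for target, probe in probes.items():
        sample.add_probe(probe)
        images[target] = acquire_image(sample)   # one image per protein target
        sample.wash_probe()
    return images

# Toy run: "imaging" just records which probe was bound at acquisition time.
# The protein names are merely examples of well-known synaptic proteins.
probes = {"PSD-95": "probe_1", "synapsin": "probe_2"}
result = sequential_multiplex(Sample(), probes,
                              acquire_image=lambda s: f"image with {s.bound_probe}")
print(result)
```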

In the original DNA PAINT, the DNA strands bind and unbind periodically to create a blinking fluorescence that can be captured using super-resolution microscopy. But that makes the process slow, requiring about half an hour for each protein.

To speed things up with PRISM, Bathe and his colleagues altered the fluorescent DNA probes. They used synthetic DNA that’s specially designed to bind more tightly or “lock” to the DNA-antibody. This gives a much brighter signal without the blinking effect. As a result, the imaging can be done faster, though at slightly lower resolution.

Though the team now captures images of 12 proteins within a sample in about an hour, this is just a start. As more DNA-antibody probes are developed for synaptic proteins, the team can readily ramp up this number to 30 protein targets.
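The speed-up is easy to put in numbers using the figures above: roughly half an hour per target with blinking-based DNA PAINT versus about five minutes per target with PRISM. A quick arithmetic check:

```python
dna_paint_minutes_per_target = 30          # "about half an hour for each protein"
prism_minutes_per_target = 60 / 12         # 12 proteins in about an hour

print(prism_minutes_per_target)                                   # ~5 minutes per target
print(dna_paint_minutes_per_target / prism_minutes_per_target)    # roughly 6x faster
```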

Thanks to the BRAIN Initiative, researchers now possess a powerful new tool to study neurons. PRISM will help them learn more mechanistically about the inner workings of synapses and how they contribute to a range of neurological conditions.

References:

[1] Multiplexed and high-throughput neuronal fluorescence imaging with diffusible probes. Guo SM, Veneziano R, Gordonov S, Li L, Danielson E, Perez de Arce K, Park D, Kulesa AB, Wamhoff EC, Blainey PC, Boyden ES, Cottrell JR, Bathe M. Nat Commun. 2019 Sep 26;10(1):4377.

[2] Super-resolution microscopy with DNA-PAINT. Schnitzbauer J, Strauss MT, Schlichthaerle T, Schueder F, Jungmann R. Nat Protoc. 2017 Jun;12(6):1198-1228.

Links:

Schizophrenia (National Institute of Mental Health)

Mark Bathe (Massachusetts Institute of Technology, Cambridge)

Jeffrey Cottrell (Broad Institute of MIT and Harvard, Cambridge)

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Institute of Mental Health; National Human Genome Research Institute; National Institute of Neurological Disorders and Stroke; National Institute of Environmental Health Sciences


The Amazing Brain: Making Up for Lost Vision


Recently, I’ve highlighted just a few of the many amazing advances coming out of the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative. And for our grand finale, I’d like to share a cool video that reveals how this revolutionary effort to map the human brain is opening up potential plans to help people with disabilities, such as vision loss, that were once unimaginable.

This video, produced by Jordi Chanovas and narrated by Stephen Macknik, State University of New York Downstate Health Sciences University, Brooklyn, outlines a new strategy aimed at restoring loss of central vision in people with age-related macular degeneration (AMD), a leading cause of vision loss among people age 50 and older. The researchers’ ultimate goal is to give such people the ability to see the faces of their loved ones or possibly even read again.

In the innovative approach you see here, neuroscientists aren’t even trying to repair the part of the eye destroyed by AMD: the light-sensitive retina. Instead, they are attempting to recreate the light-recording function of the retina within the brain itself.

How is that possible? Normally, the retina streams visual information continuously to the brain’s primary visual cortex, which receives the information and processes it into the vision that allows you to read these words. In folks with AMD-related vision loss, even though many cells in the center of the retina have stopped streaming, the primary visual cortex remains fully functional to receive and process visual information.

About five years ago, Macknik and his collaborator Susana Martinez-Conde, also at Downstate, wondered whether it might be possible to circumvent the eyes and stream an alternative source of visual information to the brain’s primary visual cortex, thereby restoring vision in people with AMD. They sketched out some possibilities and settled on an innovative system that they call OBServ.

Among the vital components of this experimental system are tiny, implantable neuro-prosthetic recording devices. Created in the Macknik and Martinez-Conde labs, these 1-centimeter devices are powered by induction coils similar to those in the cochlear implants used to help people with profound hearing loss. The researchers propose to surgically implant two of these devices in the rear of the brain, where they will orchestrate the visual process.

For technical reasons, the restoration of central vision will likely be partial, with the window of vision spanning only about the size of one-third of an adult thumbnail held at arm’s length. But researchers think that would be enough central vision for people with AMD to regain some of their lost independence.

As demonstrated in this video from the BRAIN Initiative’s “Show Us Your Brain!” contest, here’s how researchers envision the system would ultimately work (a conceptual sketch in code follows the list):

• A person with vision loss puts on a specially designed set of glasses. Each lens contains two cameras: one to record visual information in the person’s field of vision; the other to track that person’s eye movements, which are enabled by residual peripheral vision.
• The eyeglass cameras wirelessly stream the visual information they have recorded to two neuro-prosthetic devices implanted in the rear of the brain.
• The neuro-prosthetic devices process and project this information onto a specific set of excitatory neurons in the brain’s hard-wired visual pathway. Researchers have previously used genetic engineering to turn these neurons into surrogate photoreceptor cells, which function much like those in the eye’s retina.
• The surrogate photoreceptor cells in the brain relay visual information to the primary visual cortex for processing.
• All the while, the neuro-prosthetic devices perform quality control of the visual signals, calibrating them to optimize their contrast and clarity.
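To make the flow of information concrete, here is a minimal sketch of that pipeline as a chain of functions. Every name and data structure below is a placeholder invented for illustration; none of it reflects OBServ’s actual hardware or software interfaces.

```python
def capture_frame(scene_camera, eye_camera):
    """Glasses: record the visual scene and track residual eye movements."""
    return scene_camera(), eye_camera()

def implant_process(frame, gaze):
    """Implanted devices: align the image to gaze and tune contrast and clarity."""
    aligned = {"pixels": frame, "gaze": gaze}
    aligned["calibrated"] = True          # stand-in for the quality-control step
    return aligned

def stimulate_surrogate_photoreceptors(signal):
    """Project the processed signal onto engineered surrogate photoreceptor
    neurons, which relay it to the primary visual cortex for processing."""
    return f"cortical input derived from {len(signal['pixels'])} pixels"

# Toy run with dummy cameras standing in for the eyeglass hardware.
frame, gaze = capture_frame(lambda: [0] * 64, lambda: (0.1, -0.2))
print(stimulate_surrogate_photoreceptors(implant_process(frame, gaze)))
```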

While this might sound like the stuff of science fiction (and an actual application still lies several years in the future), the OBServ project is now conceivable thanks to decades of advances in the fields of neuroscience, vision, bioengineering, and bioinformatics research. All this hard work has made the primary visual cortex, with its switchboard-like wiring system, among the brain’s best-understood regions.

OBServ also has implications that extend far beyond vision loss. This project provides hope that once other parts of the brain are fully mapped, it may be possible to design equally innovative systems to help make life easier for people with other disabilities and conditions.

Links:

Age-Related Macular Degeneration (National Eye Institute/NIH)

Macknik Lab (SUNY Downstate Health Sciences University, Brooklyn)

Martinez-Conde Laboratory (SUNY Downstate Health Sciences University)

Show Us Your Brain! (BRAIN Initiative/NIH)

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: BRAIN Initiative

