

A Real-Time Look at Value-Based Decision Making


All of us make many decisions every day. For most things, such as which jacket to wear or where to grab a cup of coffee, there’s usually no right answer, so we often decide using values rooted in our past experiences. Now, neuroscientists have identified the part of the mammalian brain that stores information essential to such value-based decision making.

Researchers zeroed in on this particular brain region, known as the retrosplenial cortex (RSC), by analyzing movies—including the clip shown about 32 seconds into this video—that captured in real time what goes on in the brains of mice as they make decisions. Each white circle is a neuron, and the flickers of light reflect their activity: the brighter the light, the more active the neuron at that point in time.

All told, the NIH-funded team, led by Ryoma Hattori and Takaki Komiyama, University of California, San Diego, La Jolla, made recordings of more than 45,000 neurons across six regions of the mouse brain [1]. Neural activity isn’t usually visible. But, in this case, researchers used mice that had been genetically engineered so that their neurons, when activated, expressed a protein that glowed.
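For readers curious how movies like these become numbers, activity in fluorescence-imaging experiments is commonly summarized as the change in brightness relative to baseline, known as ΔF/F. Here’s a minimal sketch of that calculation in Python; the baseline method and the toy data are illustrative assumptions, not the team’s actual analysis pipeline:

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=10):
    """Convert one neuron's raw fluorescence trace into dF/F.

    Frames brighter than baseline give positive values, the standard
    proxy for activity in this kind of imaging. Real pipelines use a
    sliding-window baseline; a single percentile keeps the sketch short.
    """
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Toy example: 1,000 frames of noisy baseline with one burst of activity
rng = np.random.default_rng(0)
raw = 100 + 5 * rng.standard_normal(1000)
raw[400:420] += 50
print(delta_f_over_f(raw).max())  # the burst stands out as a large dF/F
```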

Their system was also set up to encourage the mice to make value-based decisions, including choosing between two drinking tubes, each with a different probability of delivering water. During this decision-making process, the RSC proved to be the region of the brain where neurons persistently lit up, reflecting how the mouse evaluated one option over the other.
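The logic of that two-tube choice is often described with a textbook reinforcement-learning model, in which the value of each option is nudged up or down by reward prediction errors. The sketch below illustrates that general idea; the learning rate, choice rule, and reward probabilities are assumptions for the toy example, not the model fit in the paper:

```python
import math
import random

def simulate_value_learning(p_reward=(0.7, 0.3), alpha=0.2, beta=5.0,
                            n_trials=500, seed=1):
    """Toy two-tube task: option values are updated by prediction errors."""
    random.seed(seed)
    q = [0.5, 0.5]  # learned value of the left and right tubes
    for _ in range(n_trials):
        # Softmax choice: the higher-valued tube is picked more often
        p_left = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        choice = 0 if random.random() < p_left else 1
        reward = 1.0 if random.random() < p_reward[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])  # prediction-error update
    return q

# Values drift toward the tubes' true reward probabilities (0.7 vs. 0.3),
# at least for options the simulated mouse samples often enough.
print(simulate_value_learning())
```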

The new discovery, described in the journal Cell, comes as something of a surprise to neuroscientists because the RSC hadn’t previously been implicated in value-based decisions. To gather additional evidence, the researchers turned to optogenetics, a technique that enabled them to use light to inactivate neurons in the RSCs of living animals. These studies confirmed that, with the RSC turned off, the mice couldn’t retrieve value information based on past experience.

The researchers note that the RSC is heavily interconnected with other key brain regions, including those involved in learning, memory, and controlling movement. This indicates that the RSC may be well situated to serve as a hub for storing value information, allowing it to be accessed and acted upon when it is needed.

The findings are yet another amazing example of how advances coming out of the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative are revolutionizing our understanding of the brain. In the future, the team hopes to learn more about how the RSC stores this information and sends it to other parts of the brain. They note that it will also be important to explore how activity in this brain area may be altered in schizophrenia, dementia, substance abuse, and other conditions that may affect decision-making abilities. It will be interesting, too, to see how this value-storage system develops during childhood and adolescence.

Reference:

[1] Area-Specificity and Plasticity of History-Dependent Value Coding During Learning. Hattori R, Danskin B, Babic Z, Mlynaryk N, Komiyama T. Cell. 2019 Jun 13;177(7):1858-1872.e15.

Links:

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Komiyama Lab (UCSD, La Jolla)

NIH Support: National Institute of Neurological Disorders and Stroke; National Eye Institute; National Institute on Deafness and Other Communication Disorders


Artificial Intelligence Speeds Brain Tumor Diagnosis


Real-time diagnostics in the operating room
Caption: Artificial intelligence speeds diagnosis of brain tumors. Top, doctor reviews digitized tumor specimen in operating room; left, the AI program predicts diagnosis; right, surgeons review results in near real-time.
Credit: Joe Hallisy, Michigan Medicine, Ann Arbor

Computers are being trained to “see” the patterns of disease often hidden in our cells and tissues. Now comes word of yet another remarkable use of artificial intelligence (AI): swiftly providing neurosurgeons with valuable, real-time information about what type of brain tumor is present, while the patient is still on the operating table.

This latest advance comes from an NIH-funded clinical trial of 278 patients undergoing brain surgery. The researchers found they could take a small tumor biopsy during surgery, feed it into a trained computer in the operating room, and receive a diagnosis that rivals the accuracy of an expert pathologist.

Traditionally, sending a biopsy out to an expert pathologist and getting back a diagnosis takes about 40 minutes at best. The computer can deliver a diagnosis in the operating room in under 3 minutes on average. The time saved helps surgeons decide how to proceed with their delicate work and make immediate, potentially life-saving treatment decisions for their patients.

As reported in Nature Medicine, researchers led by Daniel Orringer, NYU Langone Health, New York, and Todd Hollon, University of Michigan, Ann Arbor, took advantage of AI and another technological advance called stimulated Raman histology (SRH). The latter is an emerging clinical imaging technique that makes it possible to generate detailed images of a tissue sample without the usual processing steps.

The SRH technique starts off by bouncing laser light rapidly through a tissue sample. This light enables a nearby fiberoptic microscope to capture the cellular and structural details within the sample. Remarkably, it does so by picking up on subtle differences in the way lipids, proteins, and nucleic acids vibrate when exposed to the light.

Then, using a virtual coloring program, the microscope quickly pieces together and colors in the fine structural details, pixel by pixel. The result: a high-resolution, detailed image that you might expect from a pathology lab, minus the staining of cells, mounting of slides, and the other time-consuming processing procedures.

To interpret the SRH images, the researchers turned to computers and machine learning. To learn how to perform a given task, a computer must be fed large datasets of examples. In this case, the researchers used a special class of machine learning called deep neural networks, or deep learning, which is inspired by the way neural networks in the human brain process information.

In deep learning, computers look for patterns in large collections of data. As they begin to recognize complex relationships, some connections in the network are strengthened while others are weakened. The finished network is typically composed of multiple information-processing layers, which operate on the data to return a result, in this case a brain tumor diagnosis.

The team trained the computer to classify tissue samples into one of 13 categories commonly found in a brain tumor sample. Those categories included the most common brain tumors: malignant glioma, lymphoma, metastatic tumors, and meningioma. The training was based on more than 2.5 million labeled images representing samples from 415 patients.
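For the programming-minded, here’s what a bare-bones image classifier of this general kind looks like in PyTorch. The study’s actual network is far larger and isn’t detailed in this post, so the architecture, layer sizes, and patch dimensions below are placeholders; only the 13 output categories come from the study:

```python
import torch
import torch.nn as nn

class TissueClassifier(nn.Module):
    """Illustrative CNN mapping an image patch to one of 13 categories."""
    def __init__(self, n_classes=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool features to one value per channel
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TissueClassifier()
patch = torch.randn(1, 3, 224, 224)  # one assumed 224x224 image patch
print(model(patch).argmax(dim=1))    # predicted category index
```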

Next, they put the machine to the test. The researchers split each of 278 brain tissue samples into two specimens. One was sent to a conventional pathology lab for prepping and diagnosis. The other was imaged with SRH, and then the trained machine made a diagnosis.

Overall, the machine’s performance was quite impressive, returning the right answer about 95 percent of the time. That’s compared to an accuracy of 94 percent for conventional pathology.

Interestingly, the machine made a correct diagnosis in all 17 cases that a pathologist got wrong. Likewise, the pathologist got the right answer in all 14 cases in which the machine slipped up.
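Those error counts line up neatly with the reported percentages, assuming the 14 and 17 cases were the only mistakes on each side:

```python
total = 278              # tissue samples in the trial
machine_errors = 14      # cases the machine got wrong
pathologist_errors = 17  # cases the pathologist got wrong

print(f"machine:     {(total - machine_errors) / total:.1%}")      # 95.0%
print(f"pathologist: {(total - pathologist_errors) / total:.1%}")  # 93.9%
```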

The findings show that the combination of SRH and AI can be used to make real-time predictions of a patient’s brain tumor diagnosis to inform surgical decision-making. That may be especially important in places where expert neuropathologists are hard to find.

Ultimately, the researchers suggest that AI may yield even more useful information about a tumor’s underlying molecular alterations, adding ever greater precision to the diagnosis. Similar approaches are also likely to supply timely information to surgeons operating on patients with other cancers, including cancers of the skin and breast. The research team has made a brief video to give you a more detailed look at the new automated tissue-to-diagnosis pipeline.

Reference:

[1] Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Hollon TC, Pandian B, Adapa AR, Urias E, Save AV, Khalsa SSS, Eichberg DG, D’Amico RS, Farooq ZU, Lewis S, Petridis PD, Marie T, Shah AH, Garton HJL, Maher CO, Heth JA, McKean EL, Sullivan SE, Hervey-Jumper SL, Patil PG, Thompson BG, Sagher O, McKhann GM 2nd, Komotar RJ, Ivan ME, Snuderl M, Otten ML, Johnson TD, Sisti MB, Bruce JN, Muraszko KM, Trautman J, Freudiger CW, Canoll P, Lee H, Camelo-Piragua S, Orringer DA. Nat Med. 2020 Jan 6.

Links:

Video: Artificial Intelligence: Collecting Data to Maximize Potential (NIH)

New Imaging Technique Allows Quick, Automated Analysis of Brain Tumor Tissue During Surgery (National Institute of Biomedical Imaging and Bioengineering/NIH)

Daniel Orringer (NYU Langone, Perlmutter Cancer Center, New York City)

Todd Hollon (University of Michigan, Ann Arbor)

NIH Support: National Cancer Institute; National Institute of Biomedical Imaging and Bioengineering


Seeing the Cytoskeleton in a Whole New Light


It’s been 25 years since researchers coaxed a bacterium to synthesize an unusual jellyfish protein that fluoresced bright green when irradiated with blue light. Within months, another group had fused this small green fluorescent protein (GFP) to larger proteins, making their whereabouts inside the cell visible like never before.

To mark the anniversary of this Nobel Prize-winning work and show off the rainbow of color that is now being used to illuminate the inner workings of the cell, the American Society for Cell Biology (ASCB) recently held its Green Fluorescent Protein Image and Video Contest. Over the next few months, my blog will feature some of the most eye-catching entries—starting with this video that will remind those who grew up in the 1980s of those plasma balls that, when touched, light up with a simulated bolt of colorful lightning.

This video, which took third place in the ASCB contest, shows the cytoskeleton of a frequently studied human breast cancer cell line. The cytoskeleton includes protein structures called microtubules, made visible here by fluorescently tagging a protein called doublecortin (orange). Filaments of another protein called actin (purple) are seen as the fine meshwork in the cell periphery.

The cytoskeleton plays an important role in giving cells shape and structure. But it also allows a cell to move and divide. Indeed, the motion in this video shows that the complex network of cytoskeletal components is constantly being organized and reorganized in ways that researchers are still working hard to understand.

Jeffrey van Haren, Erasmus University Medical Center, Rotterdam, the Netherlands, shot this video using the tools of fluorescence microscopy when he was a postdoctoral researcher in the NIH-funded lab of Torsten Wittmann, University of California, San Francisco.

All good movies have unusual plot twists, and that’s truly the case here. Though the researchers are using a breast cancer cell line, their primary interest is in the doublecortin protein, which is normally found in association with microtubules in the developing brain. In fact, in people with mutations in the gene that encodes this protein, neurons fail to migrate properly during development. The resulting condition, called lissencephaly, leads to epilepsy, cognitive disability, and other neurological problems.

Cancer cells don’t usually express doublecortin. But, in some of their initial studies, the Wittmann team thought it would be much easier to visualize and study doublecortin in the cancer cells. And so, the researchers tagged doublecortin with an orange fluorescent protein, engineered its expression in the breast cancer cells, and van Haren started taking pictures.

This movie and others helped lead to the intriguing discovery that doublecortin binds to microtubules in some places and not others [1]. It appears to do so based on the ability to recognize and bind to certain microtubule geometries. The researchers have since moved on to studies in cultured neurons.

This video is certainly a good example of the illuminating power of fluorescent proteins: enabling us to see cells and their cytoskeletons as incredibly dynamic, constantly moving entities. And, if you’d like to see more where this came from, consider visiting van Haren’s Twitter gallery of microtubule videos.

Reference:

[1] Doublecortin is excluded from growing microtubule ends and recognizes the GDP-microtubule lattice. Ettinger A, van Haren J, Ribeiro SA, Wittmann T. Curr Biol. 2016 Jun 20;26(12):1549-1555.

Links:

Lissencephaly Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Wittmann Lab (University of California, San Francisco)

Green Fluorescent Protein Image and Video Contest (American Society for Cell Biology, Bethesda, MD)

NIH Support: National Institute of General Medical Sciences


3D Neuroscience at the Speed of Life


This fluorescent worm makes for much more than a mesmerizing video. It showcases a significant technological leap forward in our ability to capture in real time the firing of individual neurons in a living, freely moving animal.

As this Caenorhabditis elegans worm undulates, 113 neurons throughout its brain and body (green/yellow spots) get brighter and darker as each neuron activates and deactivates. In fact, about halfway through the video, you can see streaks tracking the positions of individual neurons (blue/purple-colored lines) from one frame to the next. Until now, it would have been technologically impossible to capture this “speed of life” with such clarity.
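Those tracking streaks come from linking each neuron’s position in one imaging volume to its position in the next. Here’s a toy nearest-neighbor linker in Python that conveys the basic idea; real worm-tracking pipelines must also handle non-rigid body motion and neurons that temporarily vanish from view:

```python
import numpy as np

def link_neurons(prev_xy, curr_xy, max_jump=5.0):
    """Greedily match neuron centroids in one frame to the next frame."""
    links, taken = {}, set()
    for i, p in enumerate(prev_xy):
        d = np.linalg.norm(curr_xy - p, axis=1)  # distance to each candidate
        d[list(taken)] = np.inf                  # each detection used once
        j = int(np.argmin(d))
        if d[j] <= max_jump:                     # reject implausible jumps
            links[i] = j
            taken.add(j)
    return links

prev_xy = np.array([[10.0, 10.0], [50.0, 40.0]])
curr_xy = np.array([[11.0, 10.5], [52.0, 41.0]])
print(link_neurons(prev_xy, curr_xy))  # {0: 0, 1: 1}
```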

With funding from the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, Elizabeth Hillman at Columbia University’s Zuckerman Institute, New York, has pioneered the pairing of a 3D live-imaging microscope with an ultra-fast camera. This pairing, showcased above, is a technique called Swept Confocally Aligned Planar Excitation (SCAPE) microscopy.

Since first demonstrating SCAPE in February 2015 [1], Hillman and her team have worked hard to improve, refine, and expand the approach. Recently, they used SCAPE 1.0 to image how proprioceptive neurons in fruit-fly larvae sense body position while crawling. Now, as described in Nature Methods, they introduce SCAPE “2.0,” with boosted resolution and a much faster camera—enabling 3D imaging at speeds hundreds of times faster than conventional microscopes [2]. To track a very wiggly worm, the researchers image their target 25 times a second!

As with the first-generation SCAPE, version 2.0 uses a scanning mirror to sweep a slanted sheet of light across a sample. This same mirror redirects light coming from the illuminated plane to focus onto a stationary high-speed camera. The approach lets SCAPE capture 3D images at very high speeds while causing very little photobleaching compared to conventional point-scanning microscopes, reducing the sample damage that often occurs during time-lapse microscopy.

Because SCAPE 2.0, like version 1.0, uses only a single, stationary objective lens, it doesn’t need to hold, move, or disturb a sample during imaging. This flexibility enables scientists to use SCAPE in a wide range of experiments where they can present stimuli or probe an animal’s behavior, all while imaging how the underlying cells drive those behaviors.

The SCAPE 2.0 paper shows the system’s biological versatility by also recording the beating heart of a zebrafish embryo at record-breaking speeds. In addition, SCAPE 2.0 can rapidly image large fixed, cleared, and expanded tissues such as the retina, brain, and spinal cord—enabling tracing of the shape and connectivity of cellular circuits. Hillman and her team are dedicated to exporting their technology; they provide guidance and a parts list for SCAPE 2.0 so that researchers can build their own version using inexpensive off-the-shelf parts.

Watching worms wriggling around may remind us of middle-school science class. But to neuroscientists, these images represent progress toward understanding the nervous system in action, literally at the speed of life!

References:

[1] Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms. Bouchard MB, Voleti V, Mendes CS, Lacefield C, et al. Nat Photonics. 2015;9(2):113-119.

[2] Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Voleti V, Patel KB, Li W, Campos CP, et al. Nat Methods. 2019 Sep 27;16:1054-1062.

Links:

Using Research Organisms to Study Health and Disease (National Institute of General Medical Sciences/NIH)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Hillman Lab (Columbia University, New York)

NIH Support: National Institute of Neurological Disorders and Stroke; National Heart, Lung, and Blood Institute


Exploring the Universality of Human Song



It’s often said that music is a universal language. But is it really universal? Some argue that humans are just too culturally complex and their music is far too varied to expect any foundational similarity. Yet some NIH-funded researchers recently decided to take on the challenge, using the tools of computational social science to analyze recordings of human songs and other types of data gathered from more than 300 societies around the globe.

In a study published in the journal Science [1], the researchers conclude that music is indeed universal. Their analyses showed that all of the cultures studied used song in four similar behavioral contexts: dance, love, healing, and infant care. What’s more, no matter where in the world one goes, songs used in each of those ways were found to share certain musical features, including tone, pitch, and rhythm.

As exciting as the new findings may be for those who love music (like me), the implications may extend far beyond music itself. The work may help to shed new light on the complexities of the human brain, as well as inform efforts to enhance the role of music in improving human health. The healing power of music is a major focus of the NIH-supported Sound Health Initiative.

Samuel Mehr, a researcher at Harvard University, Cambridge, MA, led this latest study, funded in part by an NIH Director’s Early Independence Award. His multi-disciplinary team included anthropologists Manvir Singh, Harvard, and Luke Glowacki, Penn State University, State College; computational linguist Timothy O’Donnell, McGill University, Montreal, Canada; and political scientists Dean Knox, Princeton University, Princeton, NJ, and Christopher Lucas, Washington University, St. Louis.

In work published last year [2], Mehr’s team found that untrained listeners in 60 countries could on average discern the human behavior associated with culturally unfamiliar musical forms. These behaviors included dancing, soothing a baby, seeking to heal illness, or expressing love to another person.

In the latest study, the team took these initial insights and applied them more broadly to the universality of music. They started with the basic question: Do all human societies make music?

To find the answer, the team accessed Yale University’s Human Relations Area Files, an internationally recognized database for cultural anthropologists. This rich resource contains high-quality data for 319 mostly tribal societies across the planet, allowing the researchers to search archival information for mentions of music. Their search pulled up music tags for 309 societies. Digging deeper into other historical records not in the database, the team confirmed that the remaining 10 societies did indeed make music.

The researchers propose that these 319 societies provide a representative cross section of humanity. They thus conclude that it is statistically probable that music is in fact found in all human societies.
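A back-of-the-envelope calculation, which is not the authors’ actual analysis, shows why a result like this is persuasive under that representativeness assumption: if even a small fraction of the world’s societies lacked music, finding it in all 319 sampled societies would be improbable.

```python
# If a fraction p of societies truly lacked music, the chance that a
# random, representative sample of 319 societies would all have music
# is (1 - p) ** 319.
for p in (0.01, 0.02, 0.05):
    print(f"p = {p:.0%}: chance of music in all 319 = {(1 - p) ** 319:.4f}")
# 1% -> ~0.04, 2% -> ~0.002, 5% -> effectively zero
```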

What exactly is so universal about music? To begin answering this complex question, the researchers tapped into more than a century of musicology to build a vast, multi-faceted database that they call the Natural History of Song (NHS).

Drawing from the NHS database, the researchers focused on nearly 5,000 vocally performed songs from 60 carefully selected human societies on all continents. By statistically analyzing those musical descriptions, they found that the behaviors associated with songs vary along three dimensions, which the researchers refer to as formality, arousal, and religiosity.

When the researchers mapped the four types of songs from their earlier study—love, dance, lullaby, and healing—onto these dimensions, they found that songs used in similar behavioral contexts around the world clustered together. For instance, across human societies, dance songs tend to appear in more formal contexts with large numbers of people. They also tend to be upbeat and energetic and don’t usually appear as part of religious ceremonies. In contrast, love songs tend to be more informal and less energetic.

Interestingly, the team also replicated its previous study in a citizen-science experiment with nearly 30,000 participants living in over 100 countries worldwide. They found again that listeners could tell what kinds of songs they were listening to, even when those songs came from faraway places. They went on to show that certain acoustic features of songs, like tempo, melody, and pitch, help to predict a song’s primary behavioral function across societies.
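As a concrete, if simplified, illustration of that last point, a small classifier can learn to map acoustic features onto a song’s behavioral context. Everything in this sketch is invented for illustration; the feature values, the three-feature summary, and the model choice are not the study’s actual data or methods:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical songs described by (tempo in BPM, mean pitch in Hz,
# typical melodic interval in semitones), labeled by behavioral context.
X = np.array([
    [130, 220, 2.0], [125, 240, 2.5],   # upbeat, energetic -> dance
    [ 70, 300, 1.0], [ 65, 310, 1.2],   # slow, soothing -> lullaby
    [ 90, 200, 1.8], [ 95, 210, 2.0],   # healing
    [ 80, 250, 1.5], [ 85, 260, 1.6],   # love
])
y = ["dance", "dance", "lullaby", "lullaby",
     "healing", "healing", "love", "love"]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[128, 230, 2.2]]))  # likely "dance" for a fast song
```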

In many musical styles, melodies are composed of a fixed set of distinct tones organized around a tonal center (sometimes called the “tonic,” it’s the “do” in “do-re-mi”). For instance, the researchers explain, the tonal center of “Row Your Boat” is found in each “row” as well as the last “merrily,” and the final “dream.”

Their analyses show that songs with such basic tonal melodies are widespread and perhaps even universal. This suggests that tonality could be a means to delve even deeper into the natural history of world music and other associated behaviors, such as play, mourning, and fighting.

While some aspects of music may be universal, others are quite diverse. That’s particularly true within societies, where people use song to express many different psychological states and views of their culture. In fact, Mehr’s team found that the musical variation within a typical society is six times greater than the musical diversity across societies.

Following up on this work, Mehr’s team is now recruiting families with young infants for a study to understand how they respond to their varied collection of songs. Meanwhile, through the Sound Health Initiative, other research teams around the country are exploring many other ways in which listening to and creating music may influence and improve our health. As a scientist and amateur musician, I couldn’t be more excited to take part in this exceptional time of discovery at the intersection of health, neuroscience, and music.

References:

[1] Universality and diversity in human song. Mehr SA, Singh M, Knox D, Ketter DM, Pickens-Jones D, Atwood S, Lucas C, Jacoby N, Egner AA, Hopkins EJ, Howard RM, Hartshorne JK, Jennings MV, Simson J, Bainbridge CM, Pinker S, O’Donnell TJ, Krasnow MM, Glowacki L. Science. 2019 Nov 22;366(6468).

[2] Form and function in human song. Mehr SA, Singh M, York H, Glowacki L, Krasnow MM. Curr Biol. 2018 Feb 5;28(3):356-368.e5.

Links:

Sound Health Initiative (NIH)

Video: Music and the Mind—A Q & A with Renée Fleming & Francis Collins (YouTube)

The Music Lab (Harvard University, Cambridge, MA)

Samuel Mehr (Harvard)

NIH Director’s Early Independence Award (Common Fund)

NIH Support: Common Fund

