
Artificial Intelligence Speeds Brain Tumor Diagnosis


Real-time diagnostics in the operating room
Caption: Artificial intelligence speeds diagnosis of brain tumors. Top, doctor reviews digitized tumor specimen in operating room; left, the AI program predicts diagnosis; right, surgeons review results in near real-time.
Credit: Joe Hallisy, Michigan Medicine, Ann Arbor

Computers are being trained to “see” the patterns of disease often hidden in our cells and tissues. Now comes word of yet another remarkable use of computer-generated artificial intelligence (AI): swiftly providing neurosurgeons with valuable, real-time information about what type of brain tumor is present, while the patient is still on the operating table.

This latest advance comes from an NIH-funded clinical trial of 278 patients undergoing brain surgery. The researchers found they could take a small tumor biopsy during surgery, feed it into a trained computer in the operating room, and receive a diagnosis that rivals the accuracy of an expert pathologist.

Traditionally, sending a biopsy out to an expert pathologist and getting back a diagnosis takes about 40 minutes at best. The computer can do it in the operating room in under 3 minutes on average. The time saved helps surgeons decide how to proceed with their delicate work and make immediate, potentially life-saving treatment decisions for their patients.

As reported in Nature Medicine, researchers led by Daniel Orringer, NYU Langone Health, New York, and Todd Hollon, University of Michigan, Ann Arbor, took advantage of AI and another technological advance called stimulated Raman histology (SRH). The latter is an emerging clinical imaging technique that makes it possible to generate detailed images of a tissue sample without the usual processing steps.

The SRH technique starts off by bouncing laser light rapidly through a tissue sample. This light enables a nearby fiberoptic microscope to capture the cellular and structural details within the sample. Remarkably, it does so by picking up on subtle differences in the way lipids, proteins, and nucleic acids vibrate when exposed to the light.

Then, using a virtual coloring program, the microscope quickly pieces together and colors in the fine structural details, pixel by pixel. The result: a high-resolution, detailed image that you might expect from a pathology lab, minus the staining of cells, mounting of slides, and the other time-consuming processing procedures.
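To get a feel for that virtual-coloring step, here is a toy sketch in Python. It maps two Raman-contrast channels onto an H&E-like palette, pixel by pixel; the channel names, blend weights, and colors are all illustrative assumptions, not the actual SRH pipeline.

```python
import numpy as np

def virtual_color(ch_lipid: np.ndarray, ch_protein: np.ndarray) -> np.ndarray:
    """Map two Raman-contrast channels onto an H&E-like palette,
    pixel by pixel (hypothetical channels, illustrative colors)."""
    # Normalize each channel to [0, 1]
    lip = (ch_lipid - ch_lipid.min()) / (np.ptp(ch_lipid) + 1e-9)
    pro = (ch_protein - ch_protein.min()) / (np.ptp(ch_protein) + 1e-9)

    # Blend toward a pinkish, eosin-like color for lipid-rich pixels and
    # a purplish, hematoxylin-like color for protein/nucleic-acid-rich ones.
    eosin = np.array([0.95, 0.55, 0.65])
    hematoxylin = np.array([0.35, 0.20, 0.55])
    rgb = lip[..., None] * eosin + pro[..., None] * hematoxylin
    return np.clip(rgb, 0.0, 1.0)   # H x W x 3 virtual-stain image

# Example: color a random 64 x 64 "sample"
rng = np.random.default_rng(0)
image = virtual_color(rng.random((64, 64)), rng.random((64, 64)))
```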

To interpret the SRH images, the researchers turned to computers and machine learning. To learn how to perform a given task, a computer must be fed large datasets of examples. In this case, the researchers used a special class of machine learning called deep neural networks, or deep learning, which is inspired by the way neural networks in the human brain process information.

In deep learning, computers look for patterns in large collections of data. As they begin to recognize complex relationships, some connections in the network are strengthened while others are weakened. The finished network is typically composed of multiple information-processing layers, which operate on the data to return a result, in this case a brain tumor diagnosis.

The team trained the computer to classify tissue samples into one of 13 categories commonly found in a brain tumor sample. Those categories included the most common brain tumors: malignant glioma, lymphoma, metastatic tumors, and meningioma. The training was based on more than 2.5 million labeled images representing samples from 415 patients.
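For readers curious what such a network looks like in practice, here is a minimal sketch of a deep convolutional classifier with 13 output categories, written in PyTorch. The architecture, layer sizes, and patch dimensions are illustrative assumptions; the study’s actual network was far larger and trained on those millions of labeled SRH images.

```python
import torch
import torch.nn as nn

class TumorPatchClassifier(nn.Module):
    """Toy convolutional network mapping an image patch to logits over
    13 diagnostic categories (illustrative, not the study's model)."""
    def __init__(self, num_classes: int = 13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # collapse to one 64-dim vector per patch
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TumorPatchClassifier()
patch = torch.randn(1, 3, 300, 300)          # one hypothetical SRH image patch
probs = torch.softmax(model(patch), dim=1)   # probabilities over the 13 classes
```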

Next, they put the machine to the test. The researchers split each of 278 brain tissue samples into two specimens. One was sent to a conventional pathology lab for prepping and diagnosis. The other was imaged with SRH, and then the trained machine made a diagnosis.

Overall, the machine’s performance was quite impressive, returning the right answer about 95 percent of the time. That’s compared to an accuracy of 94 percent for conventional pathology.

Interestingly, the machine made a correct diagnosis in all 17 cases that a pathologist got wrong. Likewise, the pathologist got the right answer in all 14 cases in which the machine slipped up.
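Assuming those were the only errors on the 278 samples, a quick back-of-the-envelope check recovers the rounded accuracy figures quoted above:

```python
n = 278                          # brain tissue samples in the trial
machine_acc = (n - 14) / n       # 14 machine errors
pathologist_acc = (n - 17) / n   # 17 pathologist errors
print(f"machine: {machine_acc:.1%}, pathologist: {pathologist_acc:.1%}")
# machine: 95.0%, pathologist: 93.9%
```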

The findings show that the combination of SRH and AI can be used to make real-time predictions of a patient’s brain tumor diagnosis to inform surgical decision-making. That may be especially important in places where expert neuropathologists are hard to find.

Ultimately, the researchers suggest that AI may yield even more useful information about a tumor’s underlying molecular alterations, adding ever greater precision to the diagnosis. Similar approaches are also likely to supply timely information to surgeons operating on patients with other cancers, including cancers of the skin and breast. The research team has made a brief video to give you a more detailed look at the new automated tissue-to-diagnosis pipeline.

Reference:

[1] Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Hollon TC, Pandian B, Adapa AR, Urias E, Save AV, Khalsa SSS, Eichberg DG, D’Amico RS, Farooq ZU, Lewis S, Petridis PD, Marie T, Shah AH, Garton HJL, Maher CO, Heth JA, McKean EL, Sullivan SE, Hervey-Jumper SL, Patil PG, Thompson BG, Sagher O, McKhann GM 2nd, Komotar RJ, Ivan ME, Snuderl M, Otten ML, Johnson TD, Sisti MB, Bruce JN, Muraszko KM, Trautman J, Freudiger CW, Canoll P, Lee H, Camelo-Piragua S, Orringer DA. Nat Med. 2020 Jan 6.

Links:

Video: Artificial Intelligence: Collecting Data to Maximize Potential (NIH)

New Imaging Technique Allows Quick, Automated Analysis of Brain Tumor Tissue During Surgery (National Institute of Biomedical Imaging and Bioengineering/NIH)

Daniel Orringer (NYU Langone, Perlmutter Cancer Center, New York City)

Todd Hollon (University of Michigan, Ann Arbor)

NIH Support: National Cancer Institute; National Institute of Biomedical Imaging and Bioengineering


3D Neuroscience at the Speed of Life


This fluorescent worm makes for much more than a mesmerizing video. It showcases a significant technological leap forward in our ability to capture in real time the firing of individual neurons in a living, freely moving animal.

As this Caenorhabditis elegans worm undulates, 113 neurons throughout its brain and body (green/yellow spots) get brighter and darker as each neuron activates and deactivates. In fact, about halfway through the video, you can see streaks tracking the positions of individual neurons (blue/purple-colored lines) from one frame to the next. Until now, it would have been technologically impossible to capture this “speed of life” with such clarity.

With funding from the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, Elizabeth Hillman at Columbia University’s Zuckerman Institute, New York, has pioneered the pairing of a 3D live-imaging microscope with an ultra-fast camera. This pairing, showcased above, is a technique called Swept Confocally Aligned Planar Excitation (SCAPE) microscopy.

Since first demonstrating SCAPE in February 2015 [1], Hillman and her team have worked hard to improve, refine, and expand the approach. Recently, they used SCAPE 1.0 to image how proprioceptive neurons in fruit-fly larvae sense body position while crawling. Now, as described in Nature Methods, they introduce SCAPE 2.0, with boosted resolution and a much faster camera, enabling 3D imaging at speeds hundreds of times faster than conventional microscopes [2]. To track a very wiggly worm, the researchers imaged it in 3D 25 times per second!

As with the first-generation SCAPE, version 2.0 uses a scanning mirror to sweep a slanted sheet of light across a sample. The same mirror redirects light coming from the illuminated plane to focus onto a stationary high-speed camera. The approach lets SCAPE capture 3D images at very high speeds while causing very little photobleaching compared to conventional point-scanning microscopes, reducing the sample damage that often occurs during time-lapse microscopy.
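Some rough arithmetic shows why that demands such a fast camera. Sweeping the slanted sheet through the sample means the camera must record one 2D frame per plane of every volume; the plane count below is an assumed round number for illustration, not a figure from the paper.

```python
volumes_per_second = 25   # from the worm-tracking example above
planes_per_volume = 200   # assumed round number, not a spec from the paper
camera_fps = volumes_per_second * planes_per_volume
print(camera_fps)         # 5000 2D frames per second from one fixed camera
```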

Because SCAPE 2.0, like version 1.0, uses only a single, stationary objective lens, the system doesn’t need to hold, move, or disturb a sample during imaging. This flexibility enables scientists to use SCAPE in a wide range of experiments where they can present stimuli or probe an animal’s behavior, all while imaging how the underlying cells drive those behaviors.

The SCAPE 2.0 paper shows the system’s biological versatility by also recording the beating heart of a zebrafish embryo at record-breaking speeds. In addition, SCAPE 2.0 can rapidly image large fixed, cleared, and expanded tissues such as the retina, brain, and spinal cord—enabling tracing of the shape and connectivity of cellular circuits. Hillman and her team are dedicated to exporting their technology; they provide guidance and a parts list for SCAPE 2.0 so that researchers can build their own version using inexpensive off-the-shelf parts.

Watching worms wriggling around may remind us of middle-school science class. But to neuroscientists, these images represent progress toward understanding the nervous system in action, literally at the speed of life!

References:

[1] Swept confocally-aligned planar excitation (SCAPE) microscopy for high speed volumetric imaging of behaving organisms. Bouchard MB, Voleti V, Mendes CS, Lacefield C, et al. Nat Photonics. 2015;9(2):113-119.

[2] Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Voleti V, Patel KB, Li W, Campos CP, et al. Nat Methods. 2019 Sep 27;16:1054–1062.

Links:

Using Research Organisms to Study Health and Disease (National Institute of General Medical Sciences/NIH)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Hillman Lab (Columbia University, New York)

NIH Support: National Institute of Neurological Disorders and Stroke; National Heart, Lung, and Blood Institute


Multiplex Rainbow Technology Offers New View of the Brain


Proteins imaged with this new approach
Caption: Confocal LNA-PRISM imaging of neuronal synapses. Conventional images of cell nuclei and two proteins (top row, three images on the left), along with 11 PRISM images of proteins and one composite, multiplexed image (bottom row, right). Credit: Adapted from Guo SM, Nature Communications, 2019

The NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative is revolutionizing our understanding of how the brain works through its creation of new imaging tools. One of the latest advances—used to produce this rainbow of images—makes it possible to view dozens of proteins in rapid succession in a single tissue sample containing thousands of neural connections, or synapses.

Apart from their colors, most of these images look nearly identical at first glance. But, upon closer inspection, you’ll see some subtle differences among them in both intensity and pattern. That’s because the images capture different proteins within the complex network of synapses—and those proteins may be present in that network in different amounts and locations. Such findings may shed light on key differences among synapses, as well as provide new clues into the roles that synaptic proteins may play in schizophrenia and various other neurological disorders.

Synapses contain hundreds of proteins that regulate the release of chemicals called neurotransmitters, which allow neurons to communicate. Each synaptic protein has its own specific job in the process. But there have been longstanding technical difficulties in observing synaptic proteins at work. Conventional fluorescence microscopy can visualize at most four proteins in a synapse.

As described in Nature Communications [1], researchers led by Mark Bathe, Massachusetts Institute of Technology (MIT), Cambridge, and Jeffrey Cottrell, Broad Institute of MIT and Harvard, Cambridge, have just upped this number considerably while delivering high-quality images. They did it by adapting an existing imaging method called DNA PAINT [2]. The researchers call their adapted method PRISM, short for Probe-based Imaging for Sequential Multiplexing.

Here’s how it works: First, researchers label proteins or other molecules of interest using antibodies that recognize those proteins. Those antibodies include a unique DNA probe that helps with the next important step: making the proteins visible under a microscope.

To do it, they deliver short snippets of complementary fluorescent DNA, which bind the DNA-antibody probes. Each protein of interest is imaged separately, but because the fluorescent snippets wash easily from the sample, researchers can generate a series of images, each capturing a different target.
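Conceptually, the acquisition reduces to a simple bind-image-wash loop. The sketch below captures that logic in Python with a mock stand-in for the instrument; the class, method, strand, and target names are all hypothetical, chosen only to illustrate the sequence of steps.

```python
class MockScope:
    """Hypothetical stand-in for microscope and fluidics control."""
    def flow_in(self, strand):
        self.bound = strand            # fluorescent strand finds its barcode
    def acquire(self):
        return f"frame with {self.bound} bound"
    def wash(self):
        self.bound = None              # gentle wash clears the strand

def multiplex(scope, probes):
    """One round per protein target: bind the imager strand, image, wash."""
    images = {}
    for target, strand in probes.items():
        scope.flow_in(strand)
        images[target] = scope.acquire()
        scope.wash()
    return images

# Two example synaptic targets (names chosen for illustration only)
channels = multiplex(MockScope(), {"PSD-95": "strand-A", "synapsin": "strand-B"})
```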

In the original DNA PAINT, the DNA strands bind and unbind periodically to create a blinking fluorescence that can be captured using super-resolution microscopy. But that makes the process slow, requiring about half an hour for each protein.

To speed things up with PRISM, Bathe and his colleagues altered the fluorescent DNA probes. They used synthetic DNA that’s specially designed to bind more tightly, or “lock,” onto the DNA-antibody probes. This gives a much brighter signal without the blinking effect. As a result, the imaging can be done faster, though at slightly lower resolution.

Though the team now captures images of 12 proteins within a sample in about an hour, this is just a start. As more DNA-antibody probes are developed for synaptic proteins, the team can readily ramp up this number to 30 protein targets.
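Taken together, the two timing figures quoted above imply roughly a six-fold speedup per protein target:

```python
paint_min_per_target = 30        # original DNA PAINT: ~half an hour per protein
prism_min_per_target = 60 / 12   # PRISM: 12 proteins in about an hour
print(prism_min_per_target, paint_min_per_target / prism_min_per_target)
# 5.0 minutes per target, a ~6x speedup
```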

Thanks to the BRAIN Initiative, researchers now possess a powerful new tool to study neurons. PRISM will help them learn more mechanistically about the inner workings of synapses and how they contribute to a range of neurological conditions.

References:

[1] Multiplexed and high-throughput neuronal fluorescence imaging with diffusible probes. Guo SM, Veneziano R, Gordonov S, Li L, Danielson E, Perez de Arce K, Park D, Kulesa AB, Wamhoff EC, Blainey PC, Boyden ES, Cottrell JR, Bathe M. Nat Commun. 2019 Sep 26;10(1):4377.

[2] Super-resolution microscopy with DNA-PAINT. Schnitzbauer J, Strauss MT, Schlichthaerle T, Schueder F, Jungmann R. Nat Protoc. 2017 Jun;12(6):1198-1228.

Links:

Schizophrenia (National Institute of Mental Health)

Mark Bathe (Massachusetts Institute of Technology, Cambridge)

Jeffrey Cottrell (Broad Institute of MIT and Harvard, Cambridge)

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Institute of Mental Health; National Human Genome Research Institute; National Institute of Neurological Disorders and Stroke; National Institute of Environmental Health Sciences


The Amazing Brain: Shining a Spotlight on Individual Neurons


A major aim of the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative is to develop new technologies that allow us to look at the brain in many different ways on many different scales. So, I’m especially pleased to highlight this winner of the initiative’s recent “Show Us Your Brain!” contest.

Here you get a close-up look at pyramidal neurons located in the hippocampus, a region of the mammalian brain involved in memory. While this tiny sample of mouse brain is densely packed with many pyramidal neurons, researchers used a new technology called ExLLSM, which combines expansion microscopy with lattice light-sheet microscopy, to zero in on just three. This super-resolution, 3D view reveals the intricacies of each cell’s structure and branching patterns.

The group that created this award-winning visual includes the labs of X. William Yang at the University of California, Los Angeles, and Kwanghun Chung at the Massachusetts Institute of Technology, Cambridge. Chung’s team also produced another quite different “Show Us Your Brain!” winner, a colorful video featuring hundreds of neural cells and connections in a part of the brain essential to movement.

Pyramidal neurons in the hippocampus come in many different varieties. Some important differences in their functional roles may be related to differences in their physical shapes, in ways that aren’t yet well understood. So, BRAIN-supported researchers are now applying a variety of new tools and approaches in a more detailed effort to identify and characterize these neurons and their subtypes.

The video featured here took advantage of Chung’s new method for preserving brain tissue samples [1]. Another secret to its powerful imagery was a novel suite of mouse models developed in the Yang lab. With some sophisticated genetics, these models make it possible to label, at random, just 1 to 5 percent of a given neuronal cell type, illuminating their full morphology in the brain [2]. The result was this unprecedented view of three pyramidal neurons in exquisite 3D detail.
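Conceptually, the sparse labeling works like an independent coin flip for each neuron of the target type, so only a scattered few light up against an otherwise dark background. Here is a toy simulation of that idea; the 3 percent probability sits within the 1-to-5-percent range quoted above, and everything else is made up for illustration.

```python
import random

def sparse_label(num_neurons: int, p: float = 0.03) -> list[int]:
    """Independently label each neuron of the target type with
    probability p; return the indices of the labeled neurons."""
    return [i for i in range(num_neurons) if random.random() < p]

labeled = sparse_label(10_000)   # at p = 3%, expect roughly 300 labeled cells
print(len(labeled))
```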

Ultimately, the goal of these and other BRAIN Initiative researchers is to produce a dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space. I look forward to their continued progress, which promises to revolutionize our understanding of how the human brain functions in both health and disease.

References:

[1] Protection of tissue physicochemical properties using polyfunctional crosslinkers. Park YG, Sohn CH, Chen R, McCue M, Yun DH, Drummond GT, Ku T, Evans NB, Oak HC, Trieu W, Choi H, Jin X, Lilascharoen V, Wang J, Truttmann MC, Qi HW, Ploegh HL, Golub TR, Chen SC, Frosch MP, Kulik HJ, Lim BK, Chung K. Nat Biotechnol. 2018 Dec 17.

[2] Genetically-directed Sparse Neuronal Labeling in BAC Transgenic Mice through Mononucleotide Repeat Frameshift. Lu XH, Yang XW. Sci Rep. 2017 Mar 8;7:43915.

Links:

Chung Lab (Massachusetts Institute of Technology, Cambridge)

Yang Lab (University of California, Los Angeles)

Show Us Your Brain! (BRAIN Initiative/NIH)

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Institute of Mental Health; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering


Singing for the Fences


Credit: NIH

I’ve sung thousands of songs in my life, mostly in the forgiving company of family and friends. But, until a few years ago, I’d never dreamed that I would have the opportunity to do a solo performance of the Star-Spangled Banner in a major league ballpark.

When I first learned that the Washington Nationals had selected me to sing the national anthem before a home game with the New York Mets on May 24, 2016, I was thrilled. But then another response emerged: yes, that would be called fear. Not only would I be singing before my biggest audience ever, I would be taking on a song that’s extremely challenging for even the most accomplished performer.

The musician in me was particularly concerned about landing the anthem’s tricky high F note on “land of the free” without screeching or going flat. So, I tracked down a voice teacher who gave me a crash course about how to breathe properly, how to project, how to stay on pitch on a high note, and how to hit the national anthem out of the park. She suggested that a good way to train is to sing the entire song with each syllable replaced by “meow.” It sounds ridiculous, but it helped—try it sometime. And then I practiced, practiced, practiced. I think the preparation paid off, but watch the video to decide for yourself!

Three years later, the scientist in me remains fascinated by what goes on in the human brain when we listen to or perform music. The NIH has even partnered with the John F. Kennedy Center for the Performing Arts to launch the Sound Health initiative to explore the role of music in health. A great many questions remain to be answered. For example, what is it that makes us enjoy singers who stay on pitch and cringe when we hear someone go sharp or flat? Why do some intervals sound pleasant and others sound grating? And, to push that line of inquiry even further, why do we tune into the pitch of people’s voices when they are speaking to help figure out if they are happy, sad, angry, and so on?

To understand more about the neuroscience of pitch, a research team, led by Bevil Conway of NIH’s National Eye Institute, used functional MRI (fMRI) to study activity in the region of the brain involved in processing sound (the auditory cortex), both in humans and in our evolutionary relative, the macaque monkey [1]. For purposes of the study, published recently in Nature Neuroscience, pitch was defined as the harmonic sounds that we hear when listening to music.

In both humans and macaques, the auditory cortex lit up comparably in response to low- and high-frequency sound. But only humans responded selectively to harmonic tones; the macaques’ cortices reacted just as strongly to toneless white noise spanning the same frequency range. Based on these findings, the researchers suspect that macaques experience music and other sounds differently than humans do. They also suggest that the perception of pitch must have provided some kind of evolutionary advantage for our ancestors, and has therefore apparently shaped the basic organization of the human brain.
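To make that stimulus contrast concrete, here is a small Python sketch that generates the two kinds of sounds the study compared: a harmonic complex tone, which listeners hear as a clear pitch, and white noise, which has no pitch. The sample rate, fundamental frequency, and harmonic count are arbitrary illustrative choices, not the study’s actual stimulus parameters.

```python
import numpy as np

fs, dur = 44_100, 1.0                       # sample rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs

# Harmonic complex tone: energy at integer multiples of a 200 Hz
# fundamental, which the ear hears as a single clear pitch.
f0 = 200.0
harmonic_tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 9)) / 8

# White noise: energy spread evenly across the spectrum, no pitch.
white_noise = np.random.default_rng(0).standard_normal(t.size)
```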

But enough about science and back to the ballpark! In front of 33,009 pitch-sensitive Homo sapiens, I managed to sing our national anthem without audible groaning from the crowd. What an honor it was! I pass along this memory to encourage each of you to test your own pitch this Independence Day. Let’s all celebrate the birth of our great nation. Have a happy Fourth!

Reference:

[1] Divergence in the functional organization of human and macaque auditory cortex revealed by fMRI responses to harmonic tones. Norman-Haignere SV, Kanwisher N, McDermott JH, Conway BR. Nat Neurosci. 2019 Jun 10. [Epub ahead of print]

Links:

Our brains appear uniquely tuned for musical pitch (National Institute of Neurological Disorders and Stroke news release)

Sound Health: An NIH-Kennedy Center Partnership (NIH)

Bevil Conway (National Eye Institute/NIH)

NIH Support: National Institute of Neurological Disorders and Stroke; National Eye Institute; National Institute of Mental Health

