
Cool Videos

Finding Beauty in the Nervous System of a Fruit Fly Larva

Posted on by Dr. Francis Collins

Wow! Click on the video. If you’ve ever wondered where those pesky flies in your fruit bowl come from, you’re looking at it right now. It’s a fruit fly larva. And this 3D movie offers never-before-seen detail of proprioception—the body’s sixth sense of knowing where its parts are and how they are moving as it navigates its surroundings or, in this case, a piece of fruit.

This live-action video highlights the movement of the young fly’s proprioceptive nerve cells. They send signals to the fly brain that are essential for tracking the body’s position in space and coordinating movement. The colors indicate the depth of the nerve cells inside the body, showing those at the surface (orange) and those further within (blue).

Such movies make it possible, for the first time, to record precisely how every one of these sensory cells is arranged within the body. They also provide a unique window into how body positions are dynamically encoded in these cells, as a segmented larva inches along in search of food.

The video was created using a form of confocal microscopy called Swept Confocally Aligned Planar Excitation, or SCAPE. It captures 3D images by sweeping a sheet of laser light back and forth across a living sample. Even better, it does this while the microscope remains completely stationary—no need for a researcher to move any lenses up or down, or hold a live sample still.

Most impressively, with this new high-speed technology, developed with support from the NIH’s BRAIN Initiative, researchers are now able to capture videos like the one seen above in record time, with each whole volume recorded in under 1/10th of a second! That’s hundreds of times faster than with a conventional microscope, which scans objects point by point.
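To get a rough feel for that speed-up, here’s a back-of-the-envelope comparison with illustrative numbers (actual voxel counts and dwell times vary from instrument to instrument): a point-scanning confocal that dwells about a microsecond on each voxel of a 10-million-voxel volume, versus a light-sheet sweep that captures the whole volume in a tenth of a second.

```latex
\[
\frac{10^{7}\ \text{voxels} \times 10^{-6}\ \text{s/voxel}}{0.1\ \text{s}}
= \frac{10\ \text{s}}{0.1\ \text{s}} = 100
\]
```

Add the overhead a point scanner needs to reposition between planes, and “hundreds of times faster” is well within reach.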

As reported in Current Biology, the team, led by Elizabeth Hillman and Wesley Grueber, Columbia University, New York, didn’t stop at characterizing the structural details and physical movements of nerve cells involved in proprioception in a crawling larva. In another set of imaging experiments, they went a step further, capturing faint flashes of green in individual labeled nerve cells each time they fired. (You have to look very closely to see them.) With each wave of motion, proprioceptive nerve cells light up in sequence, demonstrating precisely when they are sending signals to the animal’s brain.

From such videos, the researchers have generated a huge amount of data on the position and activity of each proprioceptive nerve cell. The data show that the specific position of each cell makes it uniquely sensitive to changes in position of particular segments of a larva’s body. While most of the proprioceptive nerve cells fired when their respective body segment contracted, others were attuned to fire when a larval segment stretched.

Taken together, the data show that proprioceptive nerve cells provide the brain with a detailed sequence of signals, reflecting each part of a young fly’s undulating body. It’s clear that every proprioceptive neuron has a unique role to play in the process. The researchers will now create similar movies capturing neurons in the fly’s central nervous system.

A holy grail of the BRAIN Initiative is to capture the brain in action. With these advances in imaging larval flies, researchers are getting ever closer to understanding the coordinated activities of an organism’s complete nervous system—though this one is a lot simpler than ours! And perhaps this movie—and the anticipation of the sequels to come—may even inspire a newfound appreciation for those pesky flies that sometimes hover nearby.

Reference:

[1] Characterization of Proprioceptive System Dynamics in Behaving Drosophila Larvae Using High-Speed Volumetric Microscopy. Vaadia RD, Li W, Voleti V, Singhania A, Hillman EMC, Grueber WB. Curr Biol. 2019 Mar 18;29(6):935-944.e4.

Links:

Using Research Organisms to Study Health and Disease (National Institute of General Medical Sciences/NIH)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Hillman Lab (Columbia University, New York)

Grueber Lab (Columbia University, New York)

NIH Support: National Institute of Neurological Disorders and Stroke; Eunice Kennedy Shriver National Institute of Child Health and Human Development


World’s Smallest Tic-Tac-Toe Game Built from DNA

Posted on by Dr. Francis Collins

Check out the world’s smallest board game, a nanoscale match of tic-tac-toe being played out in a test tube with X’s and O’s made of DNA. But the innovative approach you see demonstrated in this video is much more than fun and games. Ultimately, researchers hope to use this technology to build tiny DNA machines for a wide variety of biomedical applications.

Here’s how it works. By combining two relatively recent technologies, an NIH-funded team led by Lulu Qian, California Institute of Technology, Pasadena, CA, created a “swapping mechanism” that programs dynamic interactions between complex DNA nanostructures [1]. The approach takes advantage of DNA’s modular structure, along with its tendency to self-assemble, based on the ability of the four letters of DNA’s chemical alphabet to pair up in an orderly fashion, A to T and C to G.
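To get a feel for the pairing rule that drives this self-assembly, here’s a minimal Python sketch—purely illustrative, and not the team’s actual design software—that computes the complementary strand a given DNA sequence will pair with:

```python
# Watson-Crick pairing: A binds T, C binds G. Two strands stick together
# when one reads as the reverse complement of the other.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that will pair with `strand` (read antiparallel)."""
    return "".join(PAIR[base] for base in reversed(strand))

# A made-up sticky end, and the strand that will self-assemble onto it:
print(reverse_complement("ATTACGCG"))  # -> CGCGTAAT
```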

To make each of the X or O tiles in this game (displayed here in an animated cartoon version), researchers started with a single, long strand of DNA and many much shorter strands, called staples. When the sequence of DNA letters in each of those components is arranged just right, the longer strand will fold up into the desired 2D or 3D shape. This technique is called DNA origami because of its similarity to the ancient art of Japanese paper folding.

In the early days of DNA origami, researchers showed the technique could be used to produce miniature 2D images, such as a smiley face [2]. Last year, the Caltech group got more sophisticated—using DNA origami to produce the world’s smallest reproduction of the Mona Lisa [3].

In the latest work, published in Nature Communications, Qian, Philip Petersen, and Grigory Tikhomirov first mixed up a solution of nine blank DNA origami tiles in a test tube. Those DNA tiles assembled themselves into a tic-tac-toe grid. Next, two players took turns adding one of nine X or O DNA tiles into the solution. Each of the game pieces was programmed precisely to swap out only one of the tile positions on the original, blank grid, based on the DNA sequences positioned along its edges.
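As a toy model of that addressing scheme (string keys standing in for the actual edge chemistry, which uses DNA sequences), you can think of each grid position as keyed by a unique edge code, with a game piece swapping in only where its own code matches:

```python
# Toy model of edge-addressed tile swapping; codes here are made up.
grid = {pos: "blank" for pos in range(1, 10)}      # nine blank tiles
edge_code = {pos: f"edge-{pos}" for pos in grid}   # each position's unique edge sequence

def play(piece: str, code: str) -> None:
    """Swap `piece` into the single position whose edge code matches."""
    for pos, blank_code in edge_code.items():
        if blank_code == code and grid[pos] == "blank":
            grid[pos] = piece   # the displaced blank tile leaves the assembly

play("X", "edge-5")   # player X drops a tile programmed for the center
play("O", "edge-1")   # player O answers in a corner
print(grid)           # {1: 'O', 2: 'blank', ..., 5: 'X', ...}
```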

When the first match was over, player X had won! More importantly for future biomedical applications, the original, blank grid had been fully reconfigured into a new structure, built of all-new, DNA-constructed components. That achievement shows that researchers can use DNA not only to build miniature objects but also to repair or reconfigure them.

Of course, the ultimate aim of this research isn’t to build games or reproduce famous works of art. Qian wants to see her DNA techniques used to produce tiny automated machines, capable of performing basic tasks on a molecular scale. In fact, her team already has used a similar approach to build nano-sized DNA robots, programmed to sort molecules in much the same way that a person might sort laundry [4]. Such robots may prove useful in miniaturized approaches to drug discovery, development, manufacture, and/or delivery.

Another goal of the Caltech team is to demonstrate to the scientific community what’s possible with this leading-edge technology, in hopes that other researchers will pick up their innovative tools for their own applications. That would be a win-win for us all.

References:

[1] Information-based autonomous reconfiguration in systems of DNA nanostructures. Petersen P, Tikhomirov G, Qian L. Nat Commun. 2018 Dec 18;9(1):5362.

[2] Folding DNA to create nanoscale shapes and patterns. Rothemund PW. Nature. 2006 Mar 16;440(7082):297-302.

[3] Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns. Tikhomirov G, Petersen P, Qian L. Nature. 2017 Dec 6;552(7683):67-71.

[4] A cargo-sorting DNA robot. Thubagere AJ, Li W, Johnson RF, Chen Z, Doroudi S, Lee YL, Izatt G, Wittman S, Srinivas N, Woods D, Winfree E, Qian L. Science. 2017 Sep 15;357(6356).

Links:

Paul Rothemund—DNA Origami: Folded DNA as a Building Material for Molecular Devices (Caltech, Pasadena, CA)

The World’s Smallest Mona Lisa (Caltech)

Qian Lab (Caltech, Pasadena, CA)

NIH Support: National Institute of General Medical Sciences


Taking Brain Imaging Even Deeper

Posted on by Dr. Francis Collins

Thanks to yet another amazing advance made possible by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, I can now take you on a 3D fly-through of all six layers of the part of the mammalian brain that processes external signals into vision. This unprecedented view is made possible by three-photon microscopy, a low-energy imaging approach that is allowing researchers to peer deeply within the brains of living creatures without damaging or killing their brain cells.

The basic idea of multi-photon microscopy is this: for fluorescence microscopy to work, you deliver photons of a specific energy (usually with a laser) to excite a fluorescent molecule, which then emits light at a slightly lower energy (longer wavelength) that appears as a burst of color in the microscope. That’s how fluorescence works. Green fluorescent protein (GFP) is one of many proteins that can be engineered into cells or mice to make that possible.

But for that version of the approach to work on tissue, the excitation photons need to penetrate deeply, and such high-energy photons scatter too much to get there. So two-photon strategies were developed, in which two photons must hit the target simultaneously, the sum of their energies activating the fluorophore.

That approach has made a big difference, but for deep tissue penetration the photons are still too high in energy. Enter the three-photon version! The even lower energy of the photons makes tissue more optically transparent, though three of them now have to hit the fluorescent protein simultaneously to activate it. But that’s part of the beauty of the system—because a triple hit is vanishingly unlikely anywhere but the focal point, the visual “noise” also goes down.
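Since a photon’s energy is inversely proportional to its wavelength, splitting the excitation energy across n photons stretches the required wavelength by roughly a factor of n. A back-of-the-envelope sketch, assuming a fluorophore with one-photon excitation near 480 nm (real multiphoton systems use somewhat different wavelengths, because absorption spectra aren’t exact multiples):

```latex
\[
E = \frac{hc}{\lambda}
\qquad\Rightarrow\qquad
\lambda_{n\text{-photon}} \approx n\,\lambda_{1\text{-photon}}
\]
\[
\lambda_{1} \approx 480\ \text{nm}
\quad\Rightarrow\quad
\lambda_{2} \approx 960\ \text{nm},
\qquad
\lambda_{3} \approx 1440\ \text{nm}
\]
```

Those longer, near-infrared wavelengths scatter far less in tissue, which is what buys the extra imaging depth.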

This particular video shows what takes place in the visual cortex of mice when objects pass before their eyes. As the objects appear, specific neurons (green) are activated to process the incoming information. Nearby, and slightly obscuring the view, are the blood vessels (pink, violet) that nourish the brain. At 33 seconds into the video, you can see the neurons’ myelin sheaths (pink) branching into the white matter of the brain’s subplate, which plays a key role in organizing the visual cortex during development.

This video comes from a recent paper in Nature Communications by a team from the Massachusetts Institute of Technology, Cambridge [1]. To obtain this pioneering view of the brain, Mriganka Sur, Murat Yildirim, and their colleagues built an innovative microscope designed to deliver three low-energy photons to each fluorophore at once. After carefully optimizing the system, they were able to peer more than 1,000 microns (about 0.04 inches) deep into the visual cortex of a live, alert mouse, far surpassing the imaging depth of standard one-photon microscopy (100 microns) and two-photon microscopy (400-500 microns).

This improved imaging depth allowed the team to plumb all six layers of the visual cortex (two-photon microscopy tops out at about three layers), as well as to record in real time the brain’s visual processing activities. Helping the researchers to achieve this feat was the availability of a genetically engineered mouse model in which the cells of the visual cortex are color-labeled to distinguish blood vessels from neurons, and to show when neurons are active.

During their in-depth imaging experiments, the MIT researchers found that each of the visual cortex’s six layers exhibited different responses to incoming visual information. One of the team’s most fascinating discoveries is that neurons residing on the subplate are actually quite active in adult animals. It had been assumed that these subplate neurons were active only during development. Their role in mature animals is now an open question for further study.

Sur often likens the work in his neuroscience lab to that of astronomers and their perpetual quest to see farther into the cosmos—but his goal is to see ever deeper into the brain. He and his group, along with many other researchers supported by the BRAIN Initiative, are indeed proving themselves to be biological explorers of the first order.

Reference:

[1] Functional imaging of visual cortical layers and subplate in awake mice with optimized three-photon microscopy. Yildirim M, Sugihara H, So PTC, Sur M. Nat Commun. 2019 Jan 11;10(1):177.

Links:

Sur Lab (Massachusetts Institute of Technology, Cambridge)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Eye Institute; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering


Can a Mind-Reading Computer Speak for Those Who Cannot?

Posted on by Dr. Francis Collins

Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak because of conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, or other similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, the reconstructed words became easier for a listener to understand, though this first attempt still sounds pretty robotic.
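For readers who like to see the moving parts, here’s a minimal sketch of the general recipe for this kind of decoding—train a network to map recorded neural activity to vocoder parameters. It is purely illustrative (PyTorch, random stand-in data, made-up channel and parameter counts), not the Columbia team’s actual architecture:

```python
# Toy sketch: learn a mapping from windows of auditory-cortex activity
# to vocoder parameters. All shapes and data here are hypothetical.
import torch
import torch.nn as nn

N_CHANNELS = 128        # hypothetical number of recording electrodes
WINDOW = 10             # time bins of neural activity per prediction
N_VOCODER_PARAMS = 32   # hypothetical vocoder parameter count

model = nn.Sequential(
    nn.Flatten(),                         # (batch, WINDOW, N_CHANNELS) -> flat vector
    nn.Linear(WINDOW * N_CHANNELS, 256),
    nn.ReLU(),
    nn.Linear(256, N_VOCODER_PARAMS),     # predicted vocoder parameters
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data:
neural = torch.randn(8, WINDOW, N_CHANNELS)   # fake recorded brain activity
target = torch.randn(8, N_VOCODER_PARAMS)     # fake params derived from true audio
loss = loss_fn(model(neural), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In training, the target parameters come from the audio the person actually heard; once trained, the network’s predictions drive the vocoder to resynthesize speech from brain activity alone.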

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


Mammalian Brain Like You’ve Never Seen It Before

Posted on by Dr. Francis Collins

Credit: Gao et al., Science

Researchers are making amazing progress in developing new imaging approaches. And they are now using one of their latest creations, called ExLLSM (short for expansion lattice light-sheet microscopy), to provide us with jaw-dropping views of a wide range of biological systems, including the incredibly complex neural networks within the mammalian brain.

In this video, ExLLSM takes us on a super-resolution, 3D voyage through a tiny sample (about 75 microns, or 0.0030 inches, thick) from the part of the mouse brain that processes sensation, the primary somatosensory cortex. The video zooms in and out of densely packed pyramidal neurons (large yellow cell bodies), each of which has about 7,000 synapses, or connections. You can also see presynapses (cyan), the part of the neuron that sends chemical signals; and postsynapses (magenta), the part of the neuron that receives chemical signals.

At 1:45, the video zooms in on dendritic spines, which are mushroom-like nubs on the neuronal branches (yellow). These structures, located on the tips of dendrites, receive incoming signals that are turned into electrical impulses. While dendritic spines have been imaged in black and white with electron microscopy, they’ve never been presented before on such a vast, colorful scale.

The video comes from a paper, published recently in the journal Science [1], from the labs of Ed Boyden, Massachusetts Institute of Technology, Cambridge, and the Nobel Prize-winning Eric Betzig, Janelia Research Campus of the Howard Hughes Medical Institute, Ashburn, VA. Like many collaborations, this one comes with a little story.

Four years ago, the Boyden lab developed expansion microscopy (ExM). The technique involves infusing cells with a hydrogel, made from a chemical used in disposable diapers. The hydrogel expands molecules within the cell away from each other, usually by about 4.5 times, but still locks them into place for remarkable imaging clarity. It makes structures visible by light microscopy that are normally below the resolution limit.
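A quick sanity check on what a 4.5-fold expansion buys, assuming a conventional diffraction limit of roughly 300 nm for light microscopy (an illustrative figure, not the paper’s measured resolution):

```latex
\[
\text{effective resolution}
\approx \frac{\text{diffraction limit}}{\text{expansion factor}}
\approx \frac{300\ \text{nm}}{4.5}
\approx 67\ \text{nm}
\]
```

That puts synapse-scale structures, normally invisible to light microscopy, comfortably within reach of an ordinary objective.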

Though the expansion technique has worked well with a small number of cells under a standard light microscope, it hasn’t been as successful—until now—at imaging thicker tissue samples. That’s because thicker tissue is harder to illuminate, and flooding the specimen with light often bleaches out the fluorescent markers that scientists use to label proteins. The signal just fades away.

For Boyden, that was a problem that needed to be solved. Because his lab’s goal is to trace the inner workings of the brain in unprecedented detail, Boyden wants to image entire neural circuits in relatively thick swaths of tissue, not just look at individual cells in isolation.

After some discussion, Boyden’s team concluded that the best solution might be to swap the standard microscope’s illumination for a relatively new imaging tool developed in the Betzig lab. It’s called lattice light-sheet microscopy (LLSM), and it generates extremely thin sheets of light that illuminate tissue only in a very tightly defined plane, dramatically reducing light-related bleaching of fluorescent markers in the tissue sample. This allows LLSM to image much larger volumes and quickly deliver stunningly vivid pictures.

Telephone calls were made, and the Betzig lab soon welcomed Ruixuan Gao, Shoh Asano, and colleagues from the Boyden lab to try their hand at combining the two techniques. As the video above shows, ExLLSM has proved to be a perfect technological match. In addition to the movie above, the team has used ExLLSM to provide unprecedented views of a range of samples—from human kidney to neuron bundles in the brain of the fruit fly.

Not only is ExLLSM super-resolution, it’s also super-fast. In fact, the team imaged the entire fruit fly brain in 2 1/2 days—an effort that would take years using an electron microscope.

ExLLSM will likely never supplant the power of electron microscopy or standard fluorescent light microscopy. Still, this new combo imaging approach shows much promise as a complementary tool for biological exploration. The more innovative imaging approaches that researchers have in their toolbox, the better for our ongoing efforts to unlock the mysteries of the brain and other complex biological systems. And yes, those systems are all complex. This is life we’re talking about!

Reference:

[1] Cortical column and whole-brain imaging with molecular contrast and nanoscale resolution. Gao R, Asano SM, Upadhyayula S, Pisarev I, Milkie DE, Liu TL, Singh V, Graves A, Huynh GH, Zhao Y, Bogovic J, Colonell J, Ott CM, Zugates C, Tappan S, Rodriguez A, Mosaliganti KR, Sheu SH, Pasolli HA, Pang S, Xu CS, Megason SG, Hess H, Lippincott-Schwartz J, Hantman A, Rubin GM, Kirchhausen T, Saalfeld S, Aso Y, Boyden ES, Betzig E. Science. 2019 Jan 18;363(6424).

Links:

Video: Expansion Microscopy Explained (YouTube)

Video: Lattice Light-Sheet Microscopy (YouTube)

How to Rapidly Image Entire Brains at Nanoscale Resolution, Howard Hughes Medical Institute, January 17, 2019.

Synthetic Neurobiology Group (Massachusetts Institute of Technology, Cambridge)

Eric Betzig (Janelia Research Campus, Ashburn, VA)

NIH Support: National Institute of Neurological Disorders and Stroke; National Human Genome Research Institute; National Institute on Drug Abuse; National Institute of Mental Health; National Institute of Biomedical Imaging and Bioengineering


Mapping the Brain’s Memory Bank

Posted on by Dr. Francis Collins

There’s a lot of groundbreaking research now underway to map the organization and internal wiring of the brain’s hippocampus, essential for memory, emotion, and spatial processing. This colorful video depicting a mouse hippocampus offers a perfect case in point.

The video presents the most detailed 3D atlas of the hippocampus ever produced, highlighting its five previously defined zones: dentate gyrus, CA1, CA2, CA3, and subiculum. The various colors within those zones represent areas with newly discovered and distinctive patterns of gene expression, revealing previously hidden layers of structural organization.

For instance, the subiculum, which sends messages from the hippocampus to other parts of the brain, includes several subregions, such as the three marked in red, yellow, and blue at about 23 seconds into the video.

How’d the researchers do it? In the new study, published in Nature Neuroscience, the researchers started with the Allen Mouse Brain Atlas, a rich, publicly accessible 3D atlas of gene expression in the mouse brain. The team, led by Hong-Wei Dong, University of Southern California, Los Angeles, drilled down into the data to pull up 258 genes that are differentially expressed in the hippocampus and might be helpful for mapping purposes.

Some of those 258 genes were expressed only in previously defined portions of the hippocampus. Others were “turned on” only in discrete parts of known hippocampal domains, leading the researchers to define 20 distinct subregions that hadn’t been recognized before.
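Conceptually, this kind of mapping amounts to grouping locations in the hippocampus by their gene-expression profiles. Here’s a minimal sketch of the idea (random stand-in data and off-the-shelf k-means from scikit-learn—not the team’s actual informatics pipeline):

```python
# Group hippocampal voxels by their expression profiles across the 258
# marker genes; spatially coherent clusters suggest candidate subregions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels, n_genes = 5000, 258                  # 258 marker genes, per the study
expression = rng.random((n_voxels, n_genes))   # stand-in per-voxel expression levels

labels = KMeans(n_clusters=20, n_init=10).fit_predict(expression)
print(labels[:10])  # cluster assignment for the first ten voxels
```

On real data, voxels sharing a label would then be checked for spatial contiguity before being called a subregion.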

Combining these data, sophisticated analytical tools, and plenty of hard work, the team assembled the atlas and integrated it with connectivity data to create a detailed wiring diagram. It includes about 200 signaling pathways that show how all those subregions network with one another and with other portions of the brain.

What’s really interesting is that the data also showed that these components of the hippocampus contribute to three relatively independent brain-wide communication networks. While much more study is needed, those three networks appear to relate to distinct functions of the hippocampus, including spatial navigation, social behaviors, and metabolism.

This more-detailed view of the hippocampus is just the latest from the NIH-funded Mouse Connectome Project. The ongoing project aims to create a complete connectivity atlas for the entire mouse brain.

The Mouse Connectome Project isn’t just for those with an interest in mice. Indeed, because the mouse and human brain are similarly organized, studies in the smaller mouse brain can help to provide a template for making sense of the larger and more complex human brain, with its tens of billions of interconnected neurons.

Ultimately, the hope is that this understanding of healthy brain connections will provide clues for better treating the brain’s abnormal connections and disconnections, which are involved in numerous neurological conditions, including Alzheimer’s disease, Parkinson’s disease, and autism spectrum disorder.

Reference:

[1] Integration of gene expression and brain-wide connectivity reveals the multiscale organization of mouse hippocampal networks. Bienkowski MS, Bowman I, Song MY, Gou L, Ard T, Cotter K, Zhu M, Benavidez NL, Yamashita S, Abu-Jaber J, Azam S, Lo D, Foster NN, Hintiryan H, Dong HW. Nat Neurosci. 2018 Nov;21(11):1628-1643.

Links:

Mouse Connectome Project (University of Southern California, Los Angeles)

Human Connectome Project (USC)

Allen Brain Map (Allen Institute, Seattle)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Institute of Mental Health; National Cancer Institute


Halloween Fly-Through of a Mouse Skull

Posted on by Dr. Francis Collins

Credit: Chai Lab, University of Southern California, Los Angeles

Halloween is full of all kinds of “skulls”—from spooky costumes to ghoulish goodies. So, in keeping with the spirit of the season, I’d like to share this eerily informative video that takes you deep inside the real thing.


Fighting Cancer with Natural Killer Cells

Posted on by Dr. Francis Collins


Credit: Michele Ardolino, University of Ottawa, and Brian Weist, Gilead Sciences, Foster City, CA

Cancer immunotherapies, which enlist a patient’s own immune system to attack and shrink developing tumors, have come a long way in recent years, leading in some instances to dramatic cures of widely disseminated cancers. But, as this video highlights, new insights from immunology are still being revealed that may provide even greater therapeutic potential.

Our immune system comes equipped with all kinds of specialized cells, including the infection-controlling Natural Killer (NK) cells. The video shows an army of NK cells (green) attacking a tumor in a mouse (blood vessels, blue) treated with a well-established type of cancer immunotherapy known as a checkpoint inhibitor. What makes the video so interesting is that researchers didn’t think checkpoint inhibitors could activate NK cells.


Putting Bone Metastasis in the Spotlight

Posted on by Dr. Francis Collins

When cancers spread, or metastasize, from one part of the body to another, bone is a frequent and potentially devastating destination. Now, as you can see in this video, an NIH-funded research team has developed a new system that hopefully will provide us with a better understanding of what goes on when cancer cells invade bone.

In this 3D cross-section, you see the nuclei (green) and cytoplasm (red) of human prostate cancer cells growing inside a bioengineered construct of mouse bone (blue-green) that’s been placed in a mouse. The new system features an imaging window positioned next to the new bone, which enabled the researchers to produce the first series of direct, real-time micrographs of cancer cells eroding the interior of bone.


3D Action Film Stars Cancer Cell as the Villain

Posted on by Dr. Francis Collins

For centuries, microscopes have brought to light the otherwise invisible world of the cell. But microscopes don’t typically visualize the dynamic world of the cell within a living system.

For various technical reasons, researchers have typically had to displace cells, fix them in position, mount them onto slides, and look through a microscope’s viewfinder to see the cells. It can be a little like trying to study life in the ocean by observing a fish cooped up in an 8-gallon tank.

Now, a team partially funded by NIH has developed a new hybrid imaging technology to produce amazing, live-action 3D movies of living cells in their more natural state. In this video, you’re looking at a human breast cancer cell (green) making its way through a blood vessel (purple) of a young zebrafish.

At first, the cancer cell rolls along rather freely. As the cell adheres more tightly to the blood vessel wall, that rolling motion slows to a crawl. Ultimately, the cancer cell finds a place to begin making its way across and through the blood vessel wall, where it can invade other tissues.

