
Cool Videos

Watch Flowers Spring to Life


Spring has sprung! The famous Washington cherry blossoms have come and gone, and the tulips and azaleas are in full bloom. In this mesmerizing video, you’ll get a glimpse of the early steps in how some spring flowers bloom.

Floating into view are baby flowers, their cells outlined (red), at the tip of the stem of the mustard plant Arabidopsis thaliana. Stem cells expressing the gene STM (green) huddle in the center of this fast-growing region of the plant stem; these stem cells will later give rise to all of the flower parts.

As the video pans out, slightly older flowers come into view. These contain organs called sepals (red, bumpy outer regions) that will grow into leafy support structures for the flower’s petals.

Movie credits go to Nathanaël Prunet, an assistant professor at the University of California, Los Angeles, who shot this video while working in the NIH-supported lab of Elliot Meyerowitz at the California Institute of Technology, Pasadena. Prunet used confocal microscopy to capture the different ages and stages of the developing flowers, generating a 3D image data set. He then used software to produce a bird’s-eye view of those images and turned it into a cool movie. The video was one of the winners in the Federation of American Societies for Experimental Biology’s 2018 BioArt competition.
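For readers curious about the image-processing side, here’s a minimal Python sketch (using NumPy) of one common way to produce such a bird’s-eye view: collapsing each 3D stack of optical sections into a top-down frame with a maximum-intensity projection. The array layout and toy data are assumptions for illustration, not details of Prunet’s actual pipeline.

```python
import numpy as np

def birds_eye_frames(stack_series):
    """Collapse a time series of 3D confocal stacks into 2D top-down frames.

    stack_series: array of shape (time, z, y, x), one 3D image stack per
    time point (a hypothetical layout; real data sets vary).
    Returns an array of shape (time, y, x) of maximum-intensity projections,
    i.e. the brightest voxel along each vertical line of sight.
    """
    stack_series = np.asarray(stack_series)
    return stack_series.max(axis=1)  # project along the z (depth) axis

if __name__ == "__main__":
    # Toy data: 5 time points, 20 optical sections of 64 x 64 pixels each.
    rng = np.random.default_rng(0)
    toy_stacks = rng.random((5, 20, 64, 64))
    frames = birds_eye_frames(toy_stacks)
    print(frames.shape)  # (5, 64, 64): one top-down frame per time point
    # Each frame can then be written out with any movie or GIF writer.
```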

Beyond being cool, this video shows how a single gene, STM, plays a starring role in plant development. This gene acts like a molecular fountain of youth, keeping cells ever-young until it’s time to grow up and commit to making flowers and other plant parts.

Like humans, most plants begin life as a fertilized cell that divides over and over, first into a multi-cell embryo and then into mature parts, or organs. Because it is easy and inexpensive to grow and study, Arabidopsis is a favorite model for working out the basic principles that drive tissue growth and regrowth, in humans as well as in the beautiful plants outside your window. Happy Spring!

Links:

Meyerowitz Lab (California Institute of Technology, Pasadena)

Prunet Lab (University of California, Los Angeles)

The Arabidopsis Information Resource (Phoenix Bioinformatics, Fremont, CA)

BioArt Scientific Image and Video Competition (Federation of American Societies for Experimental Biology, Bethesda, MD)

NIH Support: National Institute of General Medical Sciences


Finding Beauty in the Nervous System of a Fruit Fly Larva


Wow! Click on the video. If you’ve ever wondered where those pesky flies in your fruit bowl come from, you’re looking at it right now. It’s a fruit fly larva. And this 3D movie offers never-before-seen details of proprioception, the body’s so-called sixth sense of knowing where its own parts are positioned and how they are moving as it navigates the world around it, or, in this case, the fruit.

This live-action video highlights the movement of the young fly’s proprioceptive nerve cells. They send signals to the fly brain that are essential for tracking the body’s position in space and coordinating movement. The colors indicate the depth of the nerve cells inside the body, showing those at the surface (orange) and those further within (blue).
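To give a sense of how such depth-coded coloring works, here’s a small Python sketch that assigns each cell a color by blending between orange (near the surface) and blue (deeper inside) according to its depth. The particular color values and the 200-micron depth range are made-up assumptions for illustration, not the settings used to produce the actual video.

```python
import numpy as np

ORANGE = np.array([255, 140, 0], dtype=float)   # cells near the surface
BLUE   = np.array([30, 90, 255], dtype=float)   # cells deeper inside

def depth_to_color(depth_um, max_depth_um=200.0):
    """Map a depth in microns to an RGB color between orange and blue.

    max_depth_um is a hypothetical maximum depth used only to normalize
    the scale (0 = surface, 1 = deepest).
    """
    t = np.clip(depth_um / max_depth_um, 0.0, 1.0)
    blended = (1 - t) * ORANGE + t * BLUE
    return tuple(int(channel) for channel in blended)

# Example: color three nerve cells at increasing depths.
for depth in (10, 100, 200):
    print(depth, "um ->", depth_to_color(depth))
```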

Such movies make it possible, for the first time, to record precisely how every one of these sensory cells is arranged within the body. They also provide a unique window into how body positions are dynamically encoded in these cells, as a segmented larva inches along in search of food.

The video was created using a form of confocal microscopy called Swept Confocally Aligned Planar Excitation, or SCAPE. It captures 3D images by sweeping a sheet of laser light back and forth across a living sample. Even better, it does this while the microscope remains completely stationary—no need for a researcher to move any lenses up or down, or hold a live sample still.

Most impressively, with this new high-speed technology, developed with support from the NIH’s BRAIN Initiative, researchers are now able to capture videos like the one seen above in record time, with each whole volume recorded in under 1/10th of a second! That’s hundreds of times faster than with a conventional microscope, which scans objects point by point.
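To see roughly where “hundreds of times faster” comes from, here’s a bit of back-of-the-envelope arithmetic in Python. The voxel counts and dwell time are hypothetical round numbers chosen for illustration, not specifications from the paper.

```python
# Hypothetical point-scanning microscope: one voxel at a time.
voxels_per_volume = 512 * 512 * 100   # assumed x * y * z sampling of one volume
dwell_time_s = 1e-6                   # assumed 1 microsecond spent on each voxel

point_scan_time_s = voxels_per_volume * dwell_time_s   # ~26 s per volume
scape_time_s = 0.1                                     # whole volume in under 1/10 s

print(f"point scanning: ~{point_scan_time_s:.0f} s per volume")
print(f"SCAPE:          ~{scape_time_s} s per volume")
print(f"speedup:        ~{point_scan_time_s / scape_time_s:.0f}x")
```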

As reported in Current Biology, the team, led by Elizabeth Hillman and Wesley Grueber, Columbia University, New York, didn’t stop at characterizing the structural details and physical movements of nerve cells involved in proprioception in a crawling larva. In another set of imaging experiments, they went a step further, capturing faint flashes of green in individual labeled nerve cells each time they fired. (You have to look very closely to see them.) With each wave of motion, proprioceptive nerve cells light up in sequence, demonstrating precisely when they are sending signals to the animal’s brain.

From such videos, the researchers have generated a huge amount of data on the position and activity of each proprioceptive nerve cell. The data show that the specific position of each cell makes it uniquely sensitive to changes in position of particular segments of a larva’s body. While most of the proprioceptive nerve cells fired when their respective body segment contracted, others were attuned to fire when a larval segment stretched.

Taken together, the data show that proprioceptive nerve cells provide the brain with a detailed sequence of signals, reflecting each part of a young fly’s undulating body. It’s clear that every proprioceptive neuron has a unique role to play in the process. The researchers now will create similar movies capturing neurons in the fly’s central nervous system.

A holy grail of the BRAIN Initiative is to capture the brain in action. With these advances in imaging larval flies, researchers are getting ever closer to understanding the coordinated activities of an organism’s complete nervous system—though this one is a lot simpler than ours! And perhaps this movie—and the anticipation of the sequels to come—may even inspire a newfound appreciation for those pesky flies that sometimes hover nearby.

Reference:

[1] Characterization of Proprioceptive System Dynamics in Behaving Drosophila Larvae Using High-Speed Volumetric Microscopy. Vaadia RD, Li W, Voleti V, Singhania A, Hillman EMC, Grueber WB. Curr Biol. 2019 Mar 18;29(6):935-944.e4.

Links:

Using Research Organisms to Study Health and Disease (National Institute of General Medical Sciences/NIH)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Hillman Lab (Columbia University, New York)

Grueber Lab (Columbia University, New York)

NIH Support: National Institute of Neurological Disorders and Stroke; Eunice Kennedy Shriver National Institute of Child Health and Human Development


World’s Smallest Tic-Tac-Toe Game Built from DNA


Check out the world’s smallest board game, a nanoscale match of tic-tac-toe being played out in a test tube with X’s and O’s made of DNA. But the innovative approach you see demonstrated in this video is much more than fun and games. Ultimately, researchers hope to use this technology to build tiny DNA machines for a wide variety of biomedical applications.

Here’s how it works. By combining two relatively recent technologies, an NIH-funded team led by Lulu Qian, California Institute of Technology, Pasadena, CA, created a “swapping mechanism” that programs dynamic interactions between complex DNA nanostructures [1]. The approach takes advantage of DNA’s modular structure, along with its tendency to self-assemble, based on the ability of the four letters of DNA’s chemical alphabet to pair up in an orderly fashion, A to T and C to G.
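The orderly A-to-T and C-to-G pairing that drives this self-assembly is easy to capture in code. Here’s a minimal Python sketch that checks whether two single-stranded DNA edges can zip together, that is, whether one is the reverse complement of the other; the edge sequences in the example are invented for illustration.

```python
# Watson-Crick pairing rules: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    return "".join(PAIR[base] for base in reversed(seq))

def edges_can_bind(edge_a, edge_b):
    """Two single-stranded edges can zip together if one is the
    reverse complement of the other."""
    return edge_b == reverse_complement(edge_a)

# Made-up edge sequences for two tiles meant to self-assemble side by side.
print(edges_can_bind("ATCGGT", "ACCGAT"))   # True: the edges are complementary
print(edges_can_bind("ATCGGT", "ATCGGT"))   # False: identical edges do not pair
```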

To make each of the X or O tiles in this game (displayed here in an animated cartoon version), researchers started with a single, long strand of DNA and many much shorter strands, called staples. When the sequence of DNA letters in each of those components is arranged just right, the longer strand will fold up into the desired 2D or 3D shape. This technique is called DNA origami because of its similarity to the ancient art of Japanese paper folding.

In the early days of DNA origami, researchers showed the technique could be used to produce miniature 2D images, such as a smiley face [2]. Last year, the Caltech group got more sophisticated—using DNA origami to produce the world’s smallest reproduction of the Mona Lisa [3].

In the latest work, published in Nature Communications, Qian, Philip Petersen and Grigory Tikhomirov first mixed up a solution of nine blank DNA origami tiles in a test tube. Those DNA tiles assembled themselves into a tic-tac-toe grid. Next, two players took turns adding one of nine X or O DNA tiles into the solution. Each of the game pieces was programmed precisely to swap out only one of the tile positions on the original, blank grid, based on the DNA sequences positioned along its edges.
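As a rough, software-only abstraction of that position-specific swapping (a toy model, not the actual chemistry reported in the paper), here’s a short Python sketch in which each X or O piece carries an address that determines the one blank tile it is allowed to replace on the 3 x 3 grid.

```python
# Toy model of the DNA tic-tac-toe board: a 3 x 3 grid of blank tiles,
# each identified by a (row, column) address.
grid = {(r, c): "blank" for r in range(3) for c in range(3)}

def play_piece(grid, symbol, address):
    """Swap the blank tile at one address for an X or O game piece.

    In the real system, the piece's "address" is encoded in the DNA
    sequences along its edges, so it can displace only the blank tile
    whose edges it complements; here it is just a (row, column) tuple.
    """
    if grid[address] != "blank":
        raise ValueError(f"position {address} has already been swapped out")
    grid[address] = symbol

# A short example game: players take turns adding pieces to the tube.
moves = [("X", (1, 1)), ("O", (0, 0)), ("X", (0, 2)),
         ("O", (2, 0)), ("X", (2, 0))]   # the last move targets a taken spot
for symbol, address in moves:
    try:
        play_piece(grid, symbol, address)
    except ValueError as err:
        print("rejected:", err)

# Print the final board, with "." marking tiles that are still blank.
for r in range(3):
    print(" ".join("." if grid[(r, c)] == "blank" else grid[(r, c)]
                   for c in range(3)))
```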

When the first match was over, player X had won! More importantly for future biomedical applications, the original, blank grid had been fully reconfigured into a new structure, built of all-new, DNA-constructed components. That achievement shows not only can researchers use DNA to build miniature objects, they can also use DNA to repair or reconfigure such objects.

Of course, the ultimate aim of this research isn’t to build games or reproduce famous works of art. Qian wants to see her DNA techniques used to produce tiny automated machines, capable of performing basic tasks on a molecular scale. In fact, her team already has used a similar approach to build nano-sized DNA robots, programmed to sort molecules in much the same way that a person might sort laundry [4]. Such robots may prove useful in miniaturized approaches to drug discovery, development, manufacture, and/or delivery.

Another goal of the Caltech team is to demonstrate to the scientific community what’s possible with this leading-edge technology, in hopes that other researchers will pick up their innovative tools for their own applications. That would be a win-win for us all.

References:

[1] Information-based autonomous reconfiguration in systems of DNA nanostructures. Petersen P, Tikhomirov G, Qian L. Nat Commun. 2018 Dec 18;9(1):5362.

[2] Folding DNA to create nanoscale shapes and patterns. Rothemund PW. Nature. 2006 Mar 16;440(7082):297-302.

[3] Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns. Tikhomirov G, Petersen P, Qian L. Nature. 2017 Dec 6;552(7683):67-71.

[4] A cargo-sorting DNA robot. Thubagere AJ, Li W, Johnson RF, Chen Z, Doroudi S, Lee YL, Izatt G, Wittman S, Srinivas N, Woods D, Winfree E, Qian L. Science. 2017 Sep 15;357(6356).

Links:

Paul Rothemund—DNA Origami: Folded DNA as a Building Material for Molecular Devices (Caltech, Pasadena, CA)

The World’s Smallest Mona Lisa (Caltech)

Qian Lab (Caltech, Pasadena, CA)

NIH Support: National Institute of General Medical Sciences


Taking Brain Imaging Even Deeper


Thanks to yet another amazing advance made possible by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, I can now take you on a 3D fly-through of all six layers of the part of the mammalian brain that processes external signals into vision. This unprecedented view comes courtesy of three-photon microscopy, a low-energy imaging approach that allows researchers to peer deep within the brains of living creatures without damaging or killing their brain cells.

The basic idea of multi-photon microscopy is this: in fluorescence microscopy, you deliver photons of a specific energy (usually with a laser) to excite a fluorescent molecule, which then emits light at a slightly lower energy (longer wavelength) that shows up as a burst of colored light in the microscope. Green fluorescent protein (GFP) is one of many proteins that can be engineered into cells or mice to make that possible.

But for that approach to work deep within tissue, the excitation photons need to penetrate far below the surface, and such high-energy photons scatter too much to get there. So two-photon strategies were developed, in which two lower-energy photons must strike the fluorophore simultaneously, their combined energy supplying what is needed to activate it.

That approach has made a big difference, but for really deep tissue the photons are still too energetic. Enter the three-photon version! The even lower-energy photons make tissue more optically transparent, although three of them now have to hit the fluorescent protein simultaneously to activate it. That requirement is actually part of the beauty of the system: because excitation only happens where the photons converge, the visual “noise” also goes down.
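The energy bookkeeping behind this is straightforward: a photon’s energy is E = hc/λ, so two photons at twice the one-photon excitation wavelength, or three photons at three times that wavelength, together deliver the same total energy as one high-energy photon. The short Python sketch below works through the numbers; the 480-nanometer excitation wavelength is an illustrative assumption, not a value from the study.

```python
# Photon energy E = h * c / wavelength.
H = 6.626e-34   # Planck's constant, J*s
C = 3.0e8       # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon at the given wavelength (in nanometers)."""
    return H * C / (wavelength_nm * 1e-9)

one_photon_nm = 480                          # illustrative excitation wavelength
e_target = photon_energy_joules(one_photon_nm)

# Two photons at ~2x the wavelength, or three at ~3x, sum to the same energy.
for n in (2, 3):
    e_each = photon_energy_joules(n * one_photon_nm)
    print(f"{n} photons at {n * one_photon_nm} nm deliver "
          f"{n * e_each / e_target:.2f}x the one-photon excitation energy")
```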

This particular video shows what takes place in the visual cortex of mice when objects pass before their eyes. As the objects appear, specific neurons (green) are activated to process the incoming information. Nearby, and slightly obscuring the view, are the blood vessels (pink, violet) that nourish the brain. At 33 seconds into the video, you can see the neurons’ myelin sheaths (pink) branching into the white matter of the brain’s subplate, which plays a key role in organizing the visual cortex during development.

This video comes from a recent paper in Nature Communications by a team from the Massachusetts Institute of Technology (MIT), Cambridge [1]. To obtain this pioneering view of the brain, Mriganka Sur, Murat Yildirim, and their colleagues built an innovative three-photon microscope that delivers pulses of these low-energy photons. After carefully optimizing the system, they were able to peer more than 1,000 microns (about 0.04 inches) deep into the visual cortex of a live, alert mouse, far surpassing the imaging depth of standard one-photon microscopy (about 100 microns) and two-photon microscopy (400-500 microns).

This improved imaging depth allowed the team to plumb all six layers of the visual cortex (two-photon microscopy tops out at about three layers), as well as to record in real time the brain’s visual processing activities. Helping the researchers to achieve this feat was the availability of a genetically engineered mouse model in which the cells of the visual cortex are color labelled to distinguish blood vessels from neurons, and to show when neurons are active.

During their in-depth imaging experiments, the MIT researchers found that each of the visual cortex’s six layers exhibited different responses to incoming visual information. One of the team’s most fascinating discoveries is that neurons residing on the subplate are actually quite active in adult animals. It had been assumed that these subplate neurons were active only during development. Their role in mature animals is now an open question for further study.

Sur often likens the work in his neuroscience lab to astronomy and its perpetual quest to see farther into the cosmos, except that his goal is to see ever deeper into the brain. His group and many other researchers supported by the BRAIN Initiative are indeed proving themselves to be biological explorers of the first order.

Reference:

[1] Functional imaging of visual cortical layers and subplate in awake mice with optimized three-photon microscopy. Yildirim M, Sugihara H, So PTC, Sur M. Nat Commun. 2019 Jan 11;10(1):177.

Links:

Sur Lab (Massachusetts Institute of Technology, Cambridge)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Eye Institute; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering


Can a Mind-Reading Computer Speak for Those Who Cannot?


Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the same kind of technology that Amazon’s Alexa, Apple’s Siri, and other similar devices use to respond to everyday voice commands with natural-sounding speech.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. Once those vocoder-produced sounds were processed and “cleaned up” by the neural networks, the reconstructed speech became easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
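For the technically inclined, here’s a deliberately simplified PyTorch sketch of the general idea: a small neural network learns to map features of recorded brain activity onto the parameters a vocoder would need to synthesize sound. Every detail below, including the feature dimensions, layer sizes, and the random stand-in data, is an invented placeholder rather than the architecture or data the Columbia team actually used.

```python
import torch
from torch import nn

# Hypothetical dimensions: 128 neural-activity features per time step in,
# 32 vocoder parameters (e.g., spectral coefficients) out.
N_NEURAL_FEATURES, N_VOCODER_PARAMS = 128, 32

# A small feed-forward network standing in for the deep learning step.
model = nn.Sequential(
    nn.Linear(N_NEURAL_FEATURES, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_VOCODER_PARAMS),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-in data in place of real neural recordings and vocoder targets.
neural_activity = torch.randn(1000, N_NEURAL_FEATURES)
vocoder_targets = torch.randn(1000, N_VOCODER_PARAMS)

for epoch in range(5):
    optimizer.zero_grad()
    predicted = model(neural_activity)
    loss = loss_fn(predicted, vocoder_targets)
    loss.backward()   # connections are strengthened or weakened based on error
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# In the real pipeline, the predicted parameters would drive the vocoder,
# which turns them into audible, synthesized speech.
```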

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health

