
Experiencing the Neural Symphony Underlying Memory through a Blend of Science and Art

Posted on by John Ngai, PhD, NIH BRAIN Initiative

Ever wonder how you’re able to remember life events that happened days, months, or even years ago? You have your hippocampus to thank. This essential area in the brain relies on intense and highly synchronized patterns of activity that aren’t found anywhere else in the brain. They’re called “sharp-wave ripples.”

These dynamic ripples have been likened to the brain version of an instant replay, appearing most commonly during rest after a notable experience. And, now, the top video winner in this year’s Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative’s annual Show Us Your BRAINs! Photo and Video Contest allows you to witness the “chatter” that those ripples set off in other neurons. The details of this chatter determine just how durable a particular memory is in ways neuroscientists are still working hard to understand.

Neuroscientist Saman Abbaspoor in the lab of Kari Hoffman at Vanderbilt University, Nashville, in collaboration with Tyler Sloan from the Montreal-based Quorumetrix Studio, sets the stage in the winning video by showing an electrode or probe implanted in the brain that can reach the hippocampus. This device allows the Hoffman team to wirelessly record neural activity in different layers of the hippocampus as the animal either rests or moves freely about.

In the scenes that follow, neurons (blue, cyan, and yellow) flash on and off. The colors highlight the fact that this brain area and the neurons within it aren’t all the same. Various types of neurons are found in the brain area’s different layers, some of which spark the activity you see, while others dampen it.

Hoffman explains that the specific shapes of individual cells pictured are realistic but also symbolic. While they didn’t trace the individual branches of neurons in the brain in their studies, they relied on information from previous anatomical studies, overlaying their intricate forms with flashing bursts of activity that come straight from their recorded data.

Sloan then added yet another layer of artistry to the experience with what he refers to as sonification, or the use of music to convey information about the dynamic and coordinated bursts of activity in those cells. At five seconds in, you hear the subtle flutter of a sharp-wave ripple. With each burst of active neural chatter that follows, you hear the dramatic plink of piano keys.

Together, their winning video creates a unique sensory experience that helps to explain what goes on during memory formation and recall in a way that words alone can’t adequately describe. Through their ongoing studies, Hoffman reports that they’ll continue delving even deeper into understanding these intricate dynamics and their implications for learning and memory. Ultimately, they also want to explore how brain ripples, and the neural chatter they set off, might be enhanced to make memory formation and recall even stronger.

References:

S Abbaspoor & KL Hoffman. State-dependent circuit dynamics of superficial and deep CA1 pyramidal cells in macaques. BioRxiv DOI: 10.1101/2023.12.06.570369 (2023). Please note that this article is a pre-print and has not been peer-reviewed.

NIH Support: The NIH BRAIN Initiative

This article was updated on Dec. 15, 2023 to reflect better the collaboration on the project among Abbaspoor, Hoffman and Sloan.


From Brain Waves to Real-Time Text Messaging

Posted on by Lawrence Tabak, D.D.S., Ph.D.

People who have lost the ability to speak due to a severe disability still want to get the words out. They just can’t physically do it. But in our digital age, there is now a fascinating way to overcome such profound physical limitations. Computers are being taught to decode brain waves as a person tries to speak and then interactively translate them onto a computer screen in real time.

The latest progress, demonstrated in the video above, establishes that it’s quite possible for computers trained with the help of current artificial intelligence (AI) methods to restore a vocabulary of more than 1,000 words for people with the mental but not physical ability to speak. That covers more than 85 percent of most day-to-day communication in English. With further refinements, the researchers say a 9,000-word vocabulary is well within reach.

The findings published in the journal Nature Communications come from a team led by Edward Chang, University of California, San Francisco [1]. Earlier, Chang and colleagues established that this AI-enabled system could directly decode 50 full words in real time from brain waves alone in a person with paralysis trying to speak [2]. The study is known as BRAVO, short for Brain-computer interface Restoration Of Arm and Voice.

In the latest BRAVO study, the team wanted to figure out how to condense the English language into compact units for easier decoding and expand that 50-word vocabulary. They did it in the same way we all do: by focusing not on complete words, but on the 26-letter alphabet.

The study involved a 36-year-old male with severe limb and vocal paralysis. The team designed a sentence-spelling pipeline for this individual, which enabled him to silently spell out messages using code words corresponding to each of the 26 letters in his head. As he did so, a high-density array of electrodes implanted over the brain’s sensorimotor cortex, part of the cerebral cortex, recorded his brain waves.

A sophisticated system including signal processing, speech detection, word classification, and language modeling then translated those thoughts into coherent words and complete sentences on a computer screen. This so-called speech neuroprosthesis system allows those who have lost their speech to perform roughly the equivalent of text messaging.
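For readers who like to tinker, the staged design of such a pipeline can be sketched in a few lines of Python. This is a heavily simplified, hypothetical illustration, not the published system: the real device runs trained neural networks on cortical recordings, while here letter classification is a toy nearest-template match and the language model is stood in for by a simple closest-word lookup.

```python
import difflib

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def classify_letter(feature_vector, templates):
    """Letter-classification stage: nearest-template match (toy stand-in)."""
    return min(templates, key=lambda ltr: sum(
        (a - b) ** 2 for a, b in zip(feature_vector, templates[ltr])))

def language_model_correct(letters, vocabulary):
    """Language-modeling stage: snap the raw letter string to the closest
    in-vocabulary word (a stand-in for a real language model)."""
    matches = difflib.get_close_matches(letters, vocabulary, n=1, cutoff=0.0)
    return matches[0] if matches else letters

# Toy "neural templates": letter k maps to feature vector [k, k].
templates = {ltr: [i, i] for i, ltr in enumerate(ALPHABET)}
vocabulary = ["good", "morning", "summertime", "very"]

# Simulated, noisy per-letter feature windows for the letters g-o-o-d.
windows = [[6.1, 5.9], [14.2, 13.8], [14.0, 14.3], [2.9, 3.2]]
raw = "".join(classify_letter(w, templates) for w in windows)
print(language_model_correct(raw, vocabulary))  # prints "good"
```

The point of the sketch is the division of labor: a noisy per-letter classifier can afford to make mistakes because a downstream vocabulary constraint cleans them up, which is also why the real system's errors tend to be plausible words rather than random strings.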

Chang’s team put their spelling system to the test first by asking the participant to silently reproduce a sentence displayed on a screen. They then moved on to conversations, in which the participant was asked a question and could answer freely. For instance, as in the video above, when the computer asked, “How are you today?” he responded, “I am very good.” When asked about his favorite time of year, he answered, “summertime.” An attempted hand movement signaled the computer when he was done speaking.

The computer didn’t get it exactly right every time. For instance, in the initial trials with the target sentence, “good morning,” the computer got it exactly right in one case and in another came up with “good for legs.” But, overall, their tests show that their AI device could decode with a high degree of accuracy silently spoken letters to produce sentences from a 1,152-word vocabulary at a speed of about 29 characters per minute.

On average, the spelling system got it wrong 6 percent of the time. That’s really good when you consider how common it is for errors to arise with dictation software or in any text message conversation.

Of course, much more work is needed to test this approach in many more people. They don’t yet know how individual differences or specific medical conditions might affect the outcomes. They suspect that this general approach will work for anyone so long as they remain mentally capable of thinking through and attempting to speak.

They also envision future improvements as part of their BRAVO study. For instance, it may be possible to develop a system capable of more rapid decoding of many commonly used words or phrases. Such a system could then reserve the slower spelling method for other, less common words.

But, as these results clearly demonstrate, this combination of artificial intelligence and silently controlled speech neuroprostheses to restore not just speech but meaningful communication and authentic connection between individuals who’ve lost the ability to speak and their loved ones holds fantastic potential. For that, I say BRAVO.

References:

[1] Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP, Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-Chan A, Ganguly K, Chang EF. Nature Communications (2022) 13: 6510.

[2] Neuroprosthesis for decoding speech in a paralyzed person with anarthria. Moses DA, Metzger SL, Liu JR, Tu-Chan A, Ganguly K, Chang EF, et al. N Engl J Med. 2021 Jul 15;385(3):217-227.

Links:

Voice, Speech, and Language (National Institute on Deafness and Other Communication Disorders/NIH)

ECoG BMI for Motor and Speech Control (BRAVO) (ClinicalTrials.gov)

Chang Lab (University of California, San Francisco)

NIH Support: National Institute on Deafness and Other Communication Disorders


Immune Macrophages Use Their Own ‘Morse Code’

Posted on by Dr. Francis Collins

Credit: Hoffmann Lab, UCLA

In the language of Morse code, the letter “S” is three short sounds and the letter “O” is three longer sounds. Put them together in the right order and you have a cry for help: S.O.S. Now an NIH-funded team of researchers has cracked a comparable code that specialized immune cells called macrophages use to signal and respond to a threat.

In fact, by “listening in” on thousands of macrophages over time, one by one, the researchers have identified not just a lone distress signal, or “word,” but a vocabulary of six words. Their studies show that macrophages use these six words at different times to launch an appropriate response. What’s more, they have evidence that autoimmune conditions can arise when immune cells misuse certain words in this vocabulary. This miscommunication can cause them to mistakenly attack substances produced by the immune system itself as if they were foreign invaders.

The findings, published recently in the journal Immunity, come from a University of California, Los Angeles (UCLA) team led by Alexander Hoffmann and Adewunmi Adelaja. As an example of this language of immunity, the video above shows many immune macrophages (blue and red) in both frames. You may need to watch the video four times to see what’s happening (I did). Each time you run the video, focus on one of the highlighted cells (outlined in white or green), and note how its nuclear signal intensity varies over time. That signal intensity is plotted in the rectangular box at the bottom.

The macrophages come from a mouse engineered in such a way that cells throughout its body light up to reveal the internal dynamics of an important immune signaling protein called nuclear NFκB. With the cells illuminated, the researchers could watch, or “listen in,” on this important immune signal within hundreds of individual macrophages over time to attempt to recognize and begin to interpret potentially meaningful patterns.

On the left side, macrophages are responding to an immune activating molecule called TNF. On the right, they’re responding to a bacterial toxin called LPS. While the researchers could listen to hundreds of cells at once, in the video they’ve randomly selected two cells (outlined in white or green) on each side to focus on in this example.

As shown in the box in the lower portion of each frame, the cells didn’t respond in precisely the same way to the same threat, just like two people might pronounce the same word slightly differently. But their responses nevertheless show distinct and recognizable patterns. Each of those distinct patterns could be decomposed into six code words. Together these six code words serve as a previously unrecognized immune language!

Overall, the researchers analyzed how more than 12,000 macrophage cells communicated in response to 27 different immune threats. Based on the possible arrangement of temporal nuclear NFκB dynamics, they then generated a list of more than 900 pattern features that could be potential “code words.”

Using an algorithm developed decades ago for the telecommunications industry, they then monitored which of the potential words showed up reliably when macrophages responded to a particular threatening stimulus, such as a bacterial or viral toxin. This narrowed their list to six specific features, or “words,” that correlated with a particular response.

To confirm that these pattern features contained meaning, the team turned to machine learning. If they taught a computer just those six words, they asked, could it distinguish the external threats to which the computerized cells were responding? The answer was yes.

But what if the computer had five words available, instead of six? The researchers found that the computer made more mistakes in recognizing the stimulus, leading the team to conclude that all six words are indeed needed for reliable cellular communication.
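The logic of that five-versus-six test can be illustrated with a toy classifier. This hypothetical Python sketch is not the team's machine-learning pipeline; it simply shows how a nearest-centroid rule can separate two stimuli using six features, and how ablating the one feature that distinguishes them makes the classifier guess. The centroid values are invented for illustration.

```python
def nearest_centroid(x, centroids):
    """Assign a response profile to the closest stimulus centroid."""
    return min(centroids, key=lambda s: sum(
        (a - b) ** 2 for a, b in zip(x, centroids[s])))

# Toy centroids: imagine TNF and LPS responses differ only in the sixth
# "word" (say, sustained vs. oscillatory dynamics) -- an assumption here.
centroids6 = {"TNF": [1, 0, 1, 0, 1, 0], "LPS": [1, 0, 1, 0, 1, 1]}
sample = [1, 0, 1, 0, 1, 1]  # a simulated LPS-stimulated macrophage

print(nearest_centroid(sample, centroids6))  # prints "LPS"

# Ablate the sixth "word": the two stimuli become indistinguishable.
centroids5 = {s: c[:5] for s, c in centroids6.items()}
d = {s: sum((a - b) ** 2 for a, b in zip(sample[:5], centroids5[s]))
     for s in centroids5}
print(d["TNF"] == d["LPS"])  # prints True: the classifier must now guess
```

In the real study the effect was graded rather than absolute, but the principle is the same: each of the six features carries information that the other five cannot fully replace.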

To begin to explore the implications of their findings for understanding autoimmune diseases, the researchers conducted similar studies in macrophages from a mouse model of Sjögren’s syndrome, a systemic condition in which the immune system often misguidedly attacks cells that produce saliva and tears. When they listened in on these cells, they found that they used two of the six words incorrectly. As a result, they activated the wrong responses, causing the body to mistakenly perceive a serious threat and attack itself.

While previous studies have proposed that immune cells employ a language, this is the first to identify words in that language, and to show what can happen when those words are misused. Now that researchers have a list of words, the next step is to figure out their precise definitions and interpretations [2] and, ultimately, how their misuse may be corrected to treat immunological diseases.

References:

[1] Six distinct NFκB signaling codons convey discrete information to distinguish stimuli and enable appropriate macrophage responses. Adelaja A, Taylor B, Sheu KM, Liu Y, Luecke S, Hoffmann A. Immunity. 2021 May 11;54(5):916-930.e7.

[2] NF-κB dynamics determine the stimulus specificity of epigenomic reprogramming in macrophages. Cheng QJ, Ohta S, Sheu KM, Spreafico R, Adelaja A, Taylor B, Hoffmann A. Science. 2021 Jun 18;372(6548):1349-1353.

Links:

Overview of the Immune System (National Institute of Allergy and Infectious Diseases/NIH)

Sjögren’s Syndrome (National Institute of Dental and Craniofacial Research/NIH)

Alexander Hoffmann (UCLA)

NIH Support: National Institute of General Medical Sciences; National Institute of Allergy and Infectious Diseases


Using R2D2 to Understand RNA Folding

Posted on by Dr. Francis Collins

If you love learning more about biology at a fundamental level, I have a great video for you! It simulates the 3D folding of RNA. RNA is a single-stranded molecule, but it is still capable of forming internal loops that can be stabilized by base pairing, just like its famously double-stranded parent, DNA. Understanding more about RNA folding may be valuable in many different areas of biomedical research, including developing ways to help people with RNA-related diseases, such as certain cancers and neuromuscular disorders, and designing better mRNA vaccines against infectious disease threats (like COVID-19).

Because RNA folding starts even while an RNA is still being made in the cell, the process has proven hugely challenging to follow closely. An innovative solution, shown in this video, comes from the labs of NIH grantees Julius Lucks, Northwestern University, Evanston, IL, and Alan Chen, State University of New York at Albany. The team, led by graduate student Angela Yu and including several diehard Star Wars fans, realized that to visualize RNA folding they needed a technology platform that, like a Star Wars droid, is able to “see” things that others can’t. So, they created R2D2, which is short for Reconstructing RNA Dynamics from Data.

What’s so groundbreaking about the R2D2 approach, which was published recently in Molecular Cell, is that it combines experimental data on RNA folding at the nucleotide level with predictive algorithms at the atomic level to simulate RNA folding in ultra-slow motion [1]. While other computer simulations have been available for decades, they have lacked much-needed experimental data of this complex folding process to confirm their mathematical modeling.

As a gene is transcribed into RNA one building block, or nucleotide, at a time, the elongating RNA strand starts folding immediately, before the whole molecule is fully assembled. But such folding can create a problem: the new strand can tie itself up into a knot-like structure that’s incompatible with the shape it needs to function in a cell.

To slip this knot, the cell has evolved immediate corrective pathways, or countermoves. In this R2D2 video, you can see one countermove called a toehold-mediated strand displacement. In this example, the maneuver is performed by an ancient molecule called a signal recognition particle (SRP) RNA. Though SRP RNAs are found in all forms of life, this one comes from the bacterium Escherichia coli and is made up of 114 nucleotides.
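The tug-of-war logic of toehold-mediated strand displacement can be sketched at the sequence level. This toy Python model is purely illustrative (the sequences are made up, and real displacement depends on 3D structure and kinetics, not just pair counts), but it captures the core idea: an invading strand grabs an exposed toehold, and the strand that can form more total base pairs wins the duplex.

```python
# Watson-Crick pairing rules for RNA.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def pairing(strand, target):
    """Count Watson-Crick pairs when strand is aligned against target."""
    return sum(1 for a, b in zip(strand, target) if PAIR[a] == b)

substrate = "AUGGCAUCGUACGU"      # last 4 bases are the exposed toehold
incumbent = "UACCGUAGCA"          # pairs only the first 10 bases
invader   = "UACCGUAGCAUGCA"      # pairs all 14 bases, toehold included

# The invader gains a foothold on the toehold; branch migration then lets
# the strand with more total base pairs take over the duplex.
winner = max([incumbent, invader], key=lambda s: pairing(s, substrate))
print(winner == invader)  # prints True: displacement succeeds
```

The asymmetry is the key design point: because only the invader can reach the toehold, the exchange runs reliably in one direction, which is exactly why nanotechnologists later adopted the same trick for synthetic DNA circuits.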

The colors in this video highlight different domains of the RNA molecule, all at different stages in the folding process. Some (orange, turquoise) have already folded properly, while another domain (dark purple) is temporarily knotted. For this knotted domain to slip its knot, about 5 seconds into the video, another newly forming region (fuchsia) wiggles down to gain a “toehold.” About 9 seconds in, the temporarily knotted domain untangles and unwinds, and, finally, at about 23 seconds, the strand starts to get reconfigured into the shape it needs to do its job in the cell.

Why would evolution favor such a seemingly inefficient folding process? Well, it might not be as inefficient as it first appears. In fact, as Chen noted, some nanotechnologists previously invented toehold displacement as a design principle for generating synthetic DNA and RNA circuits. Little did they know that nature may have scooped them many millennia ago!

Reference:

[1] Computationally reconstructing cotranscriptional RNA folding from experimental data reveals rearrangement of non-native folding intermediates. Yu AM, Gasper PM, Cheng L, Chen AA, Lucks JB, et al. Molecular Cell 8, 1-14. 18 February 2021.

Links:

Ribonucleic Acid (RNA) (National Human Genome Research Institute/NIH)

Chen Lab (State University of New York at Albany)

Lucks Laboratory (Northwestern University, Evanston, IL)

NIH Support: National Institute of General Medical Sciences; Common Fund


Welcoming First Lady Jill Biden to NIH!

Posted on by Dr. Francis Collins


It was wonderful to have First Lady Jill Biden pay a virtual visit to NIH on February 3, 2021, on the eve of World Cancer Day. Dr. Biden joined me, National Cancer Institute (NCI) Director Ned Sharpless, and several NCI scientists to discuss recent advances in fighting cancer. On behalf of the entire NIH community, I thanked the First Lady for her decades of advocacy on behalf of cancer education, prevention, and research. To view the event, go to 53:20 in this video. Credit: Adapted from White House video.


NIH at 80: Sharing a Timeless Message from President Roosevelt

Posted on by Dr. Francis Collins

This Saturday, October 31, marks an important milestone in American public health: the 80th anniversary of President Franklin Delano Roosevelt’s dedication of the campus of the National Institutes of Health (NIH) in Bethesda, MD. The President’s stirring speech, delivered from the steps of NIH’s brand-new Administration Building (now called Building 1), was much more than a ribbon-cutting ceremony. It gave voice to NIH’s commitment to using the power of science “to do infinitely more” for the health of all people with “no distinctions of race, of creed, or of color.”

“We cannot be a strong nation unless we are a healthy nation. And so, we must recruit not only men and materials, but also knowledge and science in the service of national strength,” Roosevelt told the crowd of about 3,000. To get a sense of what it was like to be there on that historic day, I encourage you to check out the archival video footage above from the National Archives and Records Administration (NARA).

These words from our 32nd President are especially worth revisiting for their enduring wisdom during a time of national crisis. In October 1940, with World War II raging overseas, the United States faced the prospect of defending its shores and territories from foreign forces. Yet, at the same time as he was bolstering U.S. military capacity, Roosevelt emphasized that it was also essential to use biomedical research to shore up our nation’s defenses against the threats of infectious disease. In a particularly prescient section of the speech, he said: “Now that we are less than a day by plane from the jungle-type yellow fever of South America, less than two days from the sleeping sickness of equatorial Africa, less than three days from cholera and bubonic plague, the ramparts we watch must be civilian in addition to military.”

Today, in the midst of another national crisis—the COVID-19 pandemic—a similar vision is inspiring the work of NIH. With the aim of defending the health of all populations, we are supporting science to understand the novel coronavirus that causes COVID-19 and to develop tests, treatments, and vaccines for this disease that has already killed more than 225,000 Americans and infected more than 8.6 million.

As part of the dedication ceremony, Roosevelt thanked the Luke and Helen Wilson family for donating their 70-acre estate, “Tree Tops,” to serve as a new home for NIH. (Visitors to Wilson Hall in Building 1 will see portraits of the Wilsons.) Founded in 1887, NIH had previously been housed in a small lab on Staten Island, and then in two cramped lab buildings in downtown Washington, D.C. The move to Bethesda, with NIH’s first six buildings already dotting the landscape as Roosevelt spoke, gave the small agency room to evolve into what today is the world’s largest supporter of biomedical research.

Yet, as FDR gazed out over our fledgling campus on that autumn day so long ago, he knew that NIH’s true mission would extend far beyond simply conducting science to providing much-needed hope to humans around the world. As he put it in his closing remarks: “I voice for America and for the stricken world, our hopes, our prayers, our faith, in the power of man’s humanity to man.”

On the 80th anniversary of NIH’s move to Bethesda, I could not agree more. Our science—and our humanity—will get us through this pandemic and show the path forward to brighter days ahead.

Links:

Who We Are: History (NIH)

Office of NIH History and Stetten Museum (NIH)

“70 Acres of Science” (Office of NIH History)

Coronavirus (COVID-19) (NIH)


The Amazing Brain: Shining a Spotlight on Individual Neurons

Posted on by Dr. Francis Collins

A major aim of the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative is to develop new technologies that allow us to look at the brain in many different ways on many different scales. So, I’m especially pleased to highlight this winner of the initiative’s recent “Show Us Your Brain!” contest.

Here you get a close-up look at pyramidal neurons located in the hippocampus, a region of the mammalian brain involved in memory. While this tiny sample of mouse brain is densely packed with many pyramidal neurons, researchers used new ExLLSM technology to zero in on just three. This super-resolution, 3D view reveals the intricacies of each cell’s structure and branching patterns.

The group that created this award-winning visual includes the labs of X. William Yang at the University of California, Los Angeles, and Kwanghun Chung at the Massachusetts Institute of Technology, Cambridge. Chung’s team also produced another quite different “Show Us Your Brain!” winner, a colorful video featuring hundreds of neural cells and connections in a part of the brain essential to movement.

Pyramidal neurons in the hippocampus come in many different varieties. Some important differences in their functional roles may be related to differences in their physical shapes, in ways that aren’t yet well understood. So, BRAIN-supported researchers are now applying a variety of new tools and approaches in a more detailed effort to identify and characterize these neurons and their subtypes.

The video featured here took advantage of Chung’s new method for preserving brain tissue samples [1]. Another secret to its powerful imagery was a novel suite of mouse models developed in the Yang lab. With some sophisticated genetics, these models make it possible to label, at random, just 1 to 5 percent of a given neuronal cell type, illuminating their full morphology in the brain [2]. The result was this unprecedented view of three pyramidal neurons in exquisite 3D detail.

Ultimately, the goal of these and other BRAIN Initiative researchers is to produce a dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space. I look forward to their continued progress, which promises to revolutionize our understanding of how the human brain functions in both health and disease.

References:

[1] Protection of tissue physicochemical properties using polyfunctional crosslinkers. Park YG, Sohn CH, Chen R, McCue M, Yun DH, Drummond GT, Ku T, Evans NB, Oak HC, Trieu W, Choi H, Jin X, Lilascharoen V, Wang J, Truttmann MC, Qi HW, Ploegh HL, Golub TR, Chen SC, Frosch MP, Kulik HJ, Lim BK, Chung K. Nat Biotechnol. 2018 Dec 17.

[2] Genetically-directed Sparse Neuronal Labeling in BAC Transgenic Mice through Mononucleotide Repeat Frameshift. Lu XH, Yang XW. Sci Rep. 2017 Mar 8;7:43915.

Links:

Chung Lab (Massachusetts Institute of Technology, Cambridge)

Yang Lab (University of California, Los Angeles)

Show Us Your Brain! (BRAIN Initiative/NIH)

Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Institute of Mental Health; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering


Watch Flowers Spring to Life

Posted on by Dr. Francis Collins

Spring has sprung! The famous Washington cherry blossoms have come and gone, and the tulips and azaleas are in full bloom. In this mesmerizing video, you’ll get a glimpse of the early steps in how some spring flowers bloom.

Floating into view are baby flowers, their cells outlined (red), at the tip of the stem of the mustard plant Arabidopsis thaliana. Stem cells that contain the gene STM (green) huddle in the center of this fast-growing region of the plant stem—these stem cells will later make all of the flower parts.

As the video pans out, slightly older flowers come into view. These contain organs called sepals (red, bumpy outer regions) that will grow into leafy support structures for the flower’s petals.

Movie credits go to Nathanaël Prunet, an assistant professor at the University of California, Los Angeles, who shot this video while working in the NIH-supported lab of Elliot Meyerowitz at the California Institute of Technology, Pasadena. Prunet used confocal microscopy to display the different ages and stages of the developing flowers, generating a 3D data set of images. He then used software to produce a bird’s-eye view of those images and turned it into a cool movie. The video was one of the winners in the Federation of American Societies for Experimental Biology’s 2018 BioArt competition.

Beyond being cool, this video shows how a single gene, STM, plays a starring role in plant development. This gene acts like a molecular fountain of youth, keeping cells ever-young until it’s time to grow up and commit to making flowers and other plant parts.

Like humans, most plants begin life as a fertilized cell that divides over and over—first into a multi-cell embryo and then into mature parts, or organs. Because of its ease of use and low cost, Arabidopsis is a favorite model for scientists to learn the basic principles driving tissue growth and regrowth for humans as well as the beautiful plants outside your window. Happy Spring!

Links:

Meyerowitz Lab (California Institute of Technology, Pasadena)

Prunet Lab (University of California, Los Angeles)

The Arabidopsis Information Resource (Phoenix Bioinformatics, Fremont, CA)

BioArt Scientific Image and Video Competition (Federation of American Societies for Experimental Biology, Bethesda, MD)

NIH Support: National Institute of General Medical Sciences


Finding Beauty in the Nervous System of a Fruit Fly Larva

Posted on by Dr. Francis Collins

Wow! Click on the video. If you’ve ever wondered where those pesky flies in your fruit bowl come from, you’re looking at it right now. It’s a fruit fly larva. And this 3D movie offers never-before-seen details into proprioception—the brain’s sixth sense of knowing the body’s location relative to nearby objects or, in this case, fruit.

This live-action video highlights the movement of the young fly’s proprioceptive nerve cells. They send signals to the fly brain that are essential for tracking the body’s position in space and coordinating movement. The colors indicate the depth of the nerve cells inside the body, showing those at the surface (orange) and those further within (blue).

Such movies make it possible, for the first time, to record precisely how every one of these sensory cells is arranged within the body. They also provide a unique window into how body positions are dynamically encoded in these cells, as a segmented larva inches along in search of food.

The video was created using a form of confocal microscopy called Swept Confocally Aligned Planar Excitation, or SCAPE. It captures 3D images by sweeping a sheet of laser light back and forth across a living sample. Even better, it does this while the microscope remains completely stationary—no need for a researcher to move any lenses up or down, or hold a live sample still.

Most impressively, with this new high-speed technology, developed with support from the NIH’s BRAIN Initiative, researchers are now able to capture videos like the one seen above in record time, with each whole volume recorded in under 1/10th of a second! That’s hundreds of times faster than with a conventional microscope, which scans objects point by point.

As reported in Current Biology, the team, led by Elizabeth Hillman and Wesley Grueber, Columbia University, New York, didn’t stop at characterizing the structural details and physical movements of nerve cells involved in proprioception in a crawling larva. In another set of imaging experiments, they went a step further, capturing faint flashes of green in individual labeled nerve cells each time they fired. (You have to look very closely to see them.) With each wave of motion, proprioceptive nerve cells light up in sequence, demonstrating precisely when they are sending signals to the animal’s brain.

From such videos, the researchers have generated a huge amount of data on the position and activity of each proprioceptive nerve cell. The data show that the specific position of each cell makes it uniquely sensitive to changes in position of particular segments of a larva’s body. While most of the proprioceptive nerve cells fired when their respective body segment contracted, others were attuned to fire when a larval segment stretched.

Taken together, the data show that proprioceptive nerve cells provide the brain with a detailed sequence of signals, reflecting each part of a young fly’s undulating body. It’s clear that every proprioceptive neuron has a unique role to play in the process. The researchers now will create similar movies capturing neurons in the fly’s central nervous system.

A holy grail of the BRAIN Initiative is to capture the brain in action. With these advances in imaging larval flies, researchers are getting ever closer to understanding the coordinated activities of an organism’s complete nervous system—though this one is a lot simpler than ours! And perhaps this movie—and the anticipation of the sequels to come—may even inspire a newfound appreciation for those pesky flies that sometimes hover nearby.

Reference:

[1] Characterization of Proprioceptive System Dynamics in Behaving Drosophila Larvae Using High-Speed Volumetric Microscopy. Vaadia RD, Li W, Voleti V, Singhania A, Hillman EMC, Grueber WB. Curr Biol. 2019 Mar 18;29(6):935-944.e4.

Links:

Using Research Organisms to Study Health and Disease (National Institute of General Medical Sciences/NIH)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

Hillman Lab (Columbia University, New York)

Grueber Lab (Columbia University, New York)

NIH Support: National Institute of Neurological Disorders and Stroke; Eunice Kennedy Shriver National Institute of Child Health and Human Development


World’s Smallest Tic-Tac-Toe Game Built from DNA

Posted on by Dr. Francis Collins

Check out the world’s smallest board game, a nanoscale match of tic-tac-toe being played out in a test tube with X’s and O’s made of DNA. But the innovative approach you see demonstrated in this video is much more than fun and games. Ultimately, researchers hope to use this technology to build tiny DNA machines for a wide variety of biomedical applications.

Here’s how it works. By combining two relatively recent technologies, an NIH-funded team led by Lulu Qian, California Institute of Technology, Pasadena, CA, created a “swapping mechanism” that programs dynamic interactions between complex DNA nanostructures [1]. The approach takes advantage of DNA’s modular structure, along with its tendency to self-assemble, based on the ability of the four letters of DNA’s chemical alphabet to pair up in an orderly fashion, A to T and C to G.
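The pairing rule that drives this self-assembly is simple enough to sketch in a few lines of code. The snippet below is purely illustrative (it is not the team's actual design software): two DNA strands can stick together when one reads as the reverse complement of the other.

```python
# Watson-Crick pairing: A binds T, and C binds G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the strand that pairs with `strand`."""
    return "".join(PAIR[base] for base in reversed(strand))

def can_hybridize(a: str, b: str) -> bool:
    """True if strands a and b are perfect pairing partners."""
    return b == reverse_complement(a)

print(can_hybridize("ATCG", "CGAT"))  # True: CGAT pairs with ATCG
print(can_hybridize("ATCG", "ATCG"))  # False: a strand isn't its own partner
```

It's this predictable, orderly matching that lets researchers "program" which pieces of DNA will find and bind one another in a test tube.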

To make each of the X or O tiles in this game (displayed here in an animated cartoon version), researchers started with a single, long strand of DNA and many much shorter strands, called staples. When the sequence of DNA letters in each of those components is arranged just right, the longer strand will fold up into the desired 2D or 3D shape. This technique is called DNA origami because of its similarity to the ancient art of Japanese paper folding.

In the early days of DNA origami, researchers showed the technique could be used to produce miniature 2D images, such as a smiley face [2]. Last year, the Caltech group got more sophisticated—using DNA origami to produce the world’s smallest reproduction of the Mona Lisa [3].

In the latest work, published in Nature Communications, Qian, Philip Petersen and Grigory Tikhomirov first mixed up a solution of nine blank DNA origami tiles in a test tube. Those DNA tiles assembled themselves into a tic-tac-toe grid. Next, two players took turns adding one of nine X or O DNA tiles into the solution. Each of the game pieces was programmed precisely to swap out only one of the tile positions on the original, blank grid, based on the DNA sequences positioned along its edges.
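The logic of that swapping mechanism can be captured in a toy model. In the sketch below, the names and "edge codes" are invented for illustration; in the real system, each position is addressed by DNA sequences along a tile's edges rather than by labels, and the swap happens chemically rather than by lookup.

```python
# Toy model of sequence-guided tile swapping: each game piece carries an
# edge code that matches exactly one position on the blank grid, so adding
# the piece to the mix can reconfigure only that one tile.
blank_grid = {pos: ("blank", f"edge-{pos}") for pos in range(1, 10)}

def add_piece(grid, symbol, edge_code):
    """Swap in a piece at whichever blank position its edge code addresses."""
    for pos, (occupant, code) in grid.items():
        if occupant == "blank" and code == edge_code:
            grid[pos] = (symbol, code)  # the piece displaces the blank tile
            return pos
    return None  # no matching blank tile left: the move has no effect

add_piece(blank_grid, "X", "edge-5")  # player X claims the center
add_piece(blank_grid, "O", "edge-1")  # player O answers in a corner
```

Because each piece can only displace the one tile whose code it matches, the players never overwrite each other's moves, which mirrors how the DNA game pieces were programmed to target a single grid position apiece.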

When the first match was over, player X had won! More importantly for future biomedical applications, the original, blank grid had been fully reconfigured into a new structure, built of all-new, DNA-constructed components. That achievement shows that researchers can use DNA not only to build miniature objects but also to repair or reconfigure them.

Of course, the ultimate aim of this research isn’t to build games or reproduce famous works of art. Qian wants to see her DNA techniques used to produce tiny automated machines, capable of performing basic tasks on a molecular scale. In fact, her team already has used a similar approach to build nano-sized DNA robots, programmed to sort molecules in much the same way that a person might sort laundry [4]. Such robots may prove useful in miniaturized approaches to drug discovery, development, manufacture, and/or delivery.

Another goal of the Caltech team is to demonstrate to the scientific community what’s possible with this leading-edge technology, in hopes that other researchers will pick up their innovative tools for their own applications. That would be a win-win for us all.

References:

[1] Information-based autonomous reconfiguration in systems of DNA nanostructures. Petersen P, Tikhomirov G, Qian L. Nat Commun. 2018 Dec 18;9(1):5362.

[2] Folding DNA to create nanoscale shapes and patterns. Rothemund PW. Nature. 2006 Mar 16;440(7082):297-302.

[3] Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns. Tikhomirov G, Petersen P, Qian L. Nature. 2017 Dec 6;552(7683):67-71.

[4] A cargo-sorting DNA robot. Thubagere AJ, Li W, Johnson RF, Chen Z, Doroudi S, Lee YL, Izatt G, Wittman S, Srinivas N, Woods D, Winfree E, Qian L. Science. 2017 Sep 15;357(6356).

Links:

Paul Rothemund—DNA Origami: Folded DNA as a Building Material for Molecular Devices (Caltech, Pasadena)

The World’s Smallest Mona Lisa (Caltech)

Qian Lab (Caltech, Pasadena, CA)

NIH Support: National Institute of General Medical Sciences


Next Page