
Taking Brain Imaging Even Deeper


Thanks to yet another amazing advance made possible by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, I can now take you on a 3D fly-through of all six layers of the part of the mammalian brain that processes external signals into vision. This unprecedented view is made possible by three-photon microscopy, a low-energy imaging approach that allows researchers to peer deep within the brains of living creatures without damaging or killing their brain cells.

The basic idea of multi-photon microscopy is this: for fluorescence microscopy to work, you need to deliver photons of a specific energy (usually with a laser) to excite a fluorescent molecule, which then emits light at a slightly lower energy (longer wavelength) that shows up as a burst of colored light in the microscope. That’s how fluorescence works. Green fluorescent protein (GFP) is one of many proteins that can be engineered into cells or mice to make that possible.

But for that single-photon version of the approach to work in tissue, the excitation photons need to penetrate deeply, and that’s not possible for such high-energy photons. So two-photon strategies were developed, in which two lower-energy photons must strike the target simultaneously, their combined energy activating the fluorophore.

That approach has made a big difference, but for deep tissue penetration the photons are still too high in energy. Enter the three-photon version! The even lower energy of the photons makes tissue more optically transparent, though now three photons have to hit the fluorescent protein simultaneously to activate it. That requirement is part of the beauty of the system: because excitation occurs only where three photons converge, the visual “noise” from out-of-focus tissue also goes down.
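
To make the energy bookkeeping concrete, here is a rough, illustrative calculation; the wavelengths below are assumptions chosen for a GFP-like fluorophore, not the exact values used in any particular experiment. Because photon energy scales inversely with wavelength, n photons of n-fold longer wavelength deliver the same total excitation energy as one short-wavelength photon:

$$E_{\text{excitation}} = n \cdot \frac{hc}{\lambda_n} = \frac{hc}{\lambda_1} \quad\Longrightarrow\quad \lambda_n \approx n\,\lambda_1$$

So a fluorophore excited by one photon near 480 nm can instead be excited by two photons near 960 nm or three photons near 1,440 nm, and those longer-wavelength, lower-energy photons scatter far less on their way down through brain tissue.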

This particular video shows what takes place in the visual cortex of mice when objects pass before their eyes. As the objects appear, specific neurons (green) are activated to process the incoming information. Nearby, and slightly obscuring the view, are the blood vessels (pink, violet) that nourish the brain. At 33 seconds into the video, you can see the neurons’ myelin sheaths (pink) branching into the white matter of the brain’s subplate, which plays a key role in organizing the visual cortex during development.

This video comes from a recent paper in Nature Communications by a team from the Massachusetts Institute of Technology, Cambridge [1]. To obtain this pioneering view of the brain, Mriganka Sur, Murat Yildirim, and their colleagues built an innovative microscope that excites fluorescent labels with three low-energy photons at a time. After carefully optimizing the system, they were able to peer more than 1,000 microns (about 0.04 inches) deep into the visual cortex of a live, alert mouse, far surpassing the imaging depth of standard one-photon microscopy (100 microns) and two-photon microscopy (400-500 microns).

This improved imaging depth allowed the team to plumb all six layers of the visual cortex (two-photon microscopy tops out at about three layers), as well as to record the brain’s visual processing activities in real time. Helping the researchers achieve this feat was the availability of a genetically engineered mouse model in which the cells of the visual cortex are color-labeled to distinguish blood vessels from neurons, and to show when neurons are active.

During their in-depth imaging experiments, the MIT researchers found that each of the visual cortex’s six layers exhibited different responses to incoming visual information. One of the team’s most fascinating discoveries is that neurons in the subplate are actually quite active in adult animals. It had been assumed that these subplate neurons were active only during development. Their role in mature animals is now an open question for further study.

Sur often likens the work in his neuroscience lab to that of astronomers in their perpetual quest to see farther into the cosmos, except that his goal is to see ever deeper into the brain. He and his group, along with many other researchers supported by the BRAIN Initiative, are indeed proving themselves to be biological explorers of the first order.

Reference:

[1] Functional imaging of visual cortical layers and subplate in awake mice with optimized three-photon microscopy. Yildirim M, Sugihara H, So PTC, Sur M. Nat Commun. 2019 Jan 11;10(1):177.

Links:

Sur Lab (Massachusetts Institute of Technology, Cambridge)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Eye Institute; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering


Largest-Ever Alzheimer’s Gene Study Brings New Answers


Alzheimer's Risk Genes

Predicting whether someone will get Alzheimer’s disease (AD) late in life, and determining how to use that information for prevention, have been intense areas of focus in biomedical research. The goal of this work is to learn not only about the genes involved in AD, but how they work together and with other complex biological, environmental, and lifestyle factors to drive this devastating neurological disease.

It’s good news to be able to report that an international team of researchers, partly funded by NIH, has made more progress in explaining the genetic component of AD. Their analysis, involving data from more than 35,000 individuals with late-onset AD, has identified variants in five new genes that put people at greater risk of AD [1]. It also points to molecular pathways involved in AD as possible avenues for prevention, and offers further confirmation of 20 other genes that had been implicated previously in AD.

The results of this largest-ever genomic study of AD suggest key roles for genes involved in the processing of beta-amyloid peptides, which form the brain plaques recognized as an important early indicator of AD. They also offer the first evidence for a genetic link to proteins that bind tau, the protein responsible for the telltale tangles in the AD brain that track closely with a person’s cognitive decline.

The new findings are the latest from the International Genomics of Alzheimer’s Project (IGAP) consortium, led by a large, collaborative team including Brian Kunkle and Margaret Pericak-Vance, University of Miami Miller School of Medicine, Miami, FL. The effort, spanning four consortia focused on AD in the United States and Europe, was launched in 2011 with the aim of discovering and mapping all the genes that contribute to AD.

An earlier IGAP study including about 25,500 people with late-onset AD identified 20 common gene variants that influence a person’s risk for developing AD late in life [2]. While that was terrific progress to be sure, the analysis also showed that those gene variants could explain only a third of the genetic component of AD. It was clear more genes with ties to AD were yet to be found.

So, in the study reported in Nature Genetics, the researchers expanded the search. While genome-wide association studies (GWAS) readily turn up common gene variants associated with particular diseases or other traits, variants that arise more rarely require much larger sample sizes to find.
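
For readers curious about what a GWAS actually computes, here is a minimal sketch of the single-variant case/control test that gets repeated at millions of positions across the genome. The genotypes and allele frequencies below are simulated and purely hypothetical; this is not the IGAP pipeline, just an illustration of why small frequency differences demand very large samples.

```python
# Minimal sketch of one case/control association test, the basic unit of a GWAS.
# All data here are simulated; real analyses adjust for ancestry, age, sex, etc.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

n_cases, n_controls = 35_000, 60_000   # sample sizes on the scale of this study
p_cases, p_controls = 0.22, 0.20       # hypothetical risk-allele frequencies

# Genotypes coded as risk-allele counts (0, 1, or 2), assuming Hardy-Weinberg proportions.
cases = rng.binomial(2, p_cases, size=n_cases)
controls = rng.binomial(2, p_controls, size=n_controls)

# Allelic 2x2 table: risk vs. non-risk allele counts in cases and controls.
table = np.array([
    [cases.sum(),    2 * n_cases    - cases.sum()],
    [controls.sum(), 2 * n_controls - controls.sum()],
])

chi2, p_value, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
# A genome-wide analysis repeats a better-adjusted version of this test at every
# variant and applies a stringent significance threshold such as p < 5e-8.
```

With only a few thousand participants, the two-percentage-point frequency difference simulated above would typically fall far short of genome-wide significance, which is why pooling data from tens of thousands of people matters so much.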

To increase their odds of finding additional variants, the researchers analyzed genomic data for more than 94,000 individuals, including more than 35,000 with a diagnosis of late-onset AD and another 60,000 older people without AD. Their search led them to variants in five additional genes, named IQCK, ACE, ADAM10, ADAMTS1, and WWOX, associated with late-onset AD that hadn’t turned up in the previous study.

Further analysis of those genes supports a view of AD in which groups of genes work together to influence risk and disease progression. In addition to some genes influencing the processing of beta-amyloid peptides and accumulation of tau proteins, others appear to contribute to AD via certain aspects of the immune system and lipid metabolism.

Each of these newly discovered variants contributes only a small amount of increased risk, and therefore probably has limited value in predicting an average person’s risk of developing AD later in life. But the variants are invaluable when it comes to advancing our understanding of AD’s biological underpinnings and pointing the way to potentially new treatment approaches. For instance, these new data highlight intriguing similarities between early-onset and late-onset AD, suggesting that treatments developed for people with the early-onset form also might prove beneficial for people with the more common late-onset disease.
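
One way researchers combine many small genetic effects is a polygenic risk score, which simply sums a person’s risk-allele counts weighted by each variant’s estimated effect. The sketch below is purely illustrative: the variant names, effect sizes, and genotypes are all made up, and it is not the scoring method used in this study.

```python
# Illustrative polygenic risk score: a weighted sum of risk-allele counts.
# Variant names, effect sizes (log odds ratios), and genotypes are hypothetical.
effect_sizes = {            # per-allele log odds ratios (assumed values)
    "variant_A": 0.08,
    "variant_B": 0.05,
    "variant_C": 0.03,
}

person_genotypes = {        # risk-allele counts (0, 1, or 2) for one person
    "variant_A": 1,
    "variant_B": 2,
    "variant_C": 0,
}

score = sum(effect_sizes[v] * person_genotypes[v] for v in effect_sizes)
print(f"polygenic risk score (log odds units): {score:.2f}")
# Each term is tiny, so scores shift average risk only modestly; the real value
# of mapping these variants lies in the biological pathways they point to.
```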

It’s worth noting that the new findings continue to suggest that the search is not yet over—many more as-yet undiscovered rare variants likely play a role in AD. The search for answers to AD and so many other complex health conditions—assisted through collaborative data sharing efforts such as this one—continues at an accelerating pace.

References:

[1] Genetic meta-analysis of diagnosed Alzheimer’s disease identifies new risk loci and implicates Aβ, tau, immunity and lipid processing. Kunkle BW, Grenier-Boley B, Sims R, Bis JC, et al. Nat Genet. 2019 Mar;51(3):414-430.

[2] Meta-analysis of 74,046 individuals identifies 11 new susceptibility loci for Alzheimer’s disease. Lambert JC, Ibrahim-Verbaas CA, Harold D, Naj AC, Sims R, Bellenguez C, DeStafano AL, Bis JC, et al. Nat Genet. 2013 Dec;45(12):1452-8.

Links:

Alzheimer’s Disease Genetics Fact Sheet (National Institute on Aging/NIH)

Genome-Wide Association Studies (NIH)

Margaret Pericak-Vance (University of Miami Health System, FL)

NIH Support: National Institute on Aging; National Heart, Lung, and Blood Institute; National Human Genome Research Institute; National Institute of Allergy and Infectious Diseases; Eunice Kennedy Shriver National Institute of Child Health and Human Development; National Institute of Diabetes and Digestive and Kidney Diseases; National Institute of Neurological Disorders and Stroke


Can a Mind-Reading Computer Speak for Those Who Cannot?


Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the very same technology used by Amazon’s Alexa, Apple’s Siri, and other similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. Once those networks processed and “cleaned up” the vocoder-produced sounds, the reconstructed speech became easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.
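
To give a sense of how those pieces fit together, here is a minimal, hypothetical sketch of a deep network that maps a window of recorded neural activity to the parameters a vocoder would need to synthesize a frame of sound. The channel counts, window size, architecture, and training setup are all assumptions for illustration; this is not the model described in the paper.

```python
# Hypothetical sketch: a deep network mapping a window of neural recordings
# to one frame of vocoder parameters. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

N_ELECTRODES = 128       # assumed number of auditory-cortex recording channels
WINDOW = 30              # assumed number of time samples per input window
N_VOCODER_PARAMS = 32    # assumed size of the vocoder parameter vector per frame

class BrainToVocoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                          # (batch, electrodes * window)
            nn.Linear(N_ELECTRODES * WINDOW, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, N_VOCODER_PARAMS),      # one vocoder frame per window
        )

    def forward(self, neural_window):
        return self.net(neural_window)

model = BrainToVocoder()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random data, standing in for paired recordings of
# brain activity and the vocoder parameters of the speech the patient heard.
neural = torch.randn(8, N_ELECTRODES, WINDOW)
target_params = torch.randn(8, N_VOCODER_PARAMS)

pred = model(neural)
loss = loss_fn(pred, target_params)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

In practice, such a network would be trained on paired recordings of auditory-cortex activity and the vocoder parameters of the speech the participants were hearing, then evaluated on held-out recordings.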

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.

Reference:

[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.

Links:

Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health


Skin Cells Can Be Reprogrammed In Vivo


Daniel Gallego-Perez
Credit: The Ohio State University College of Medicine, Columbus

Thousands of Americans are rushed to the hospital each day with traumatic injuries. Daniel Gallego-Perez hopes that small chips similar to the one that he’s touching with a metal stylus in this photo will one day be a part of their recovery process.

The chip, about one square centimeter in size, includes an array of tiny channels with the potential to regenerate damaged tissue in people. Gallego-Perez, a researcher at The Ohio State University Colleges of Medicine and Engineering, Columbus, has received a 2018 NIH Director’s New Innovator Award to develop the chip to reprogram skin and other cells to become other types of tissue needed for healing. The reprogrammed cells then could regenerate and restore injured neural or vascular tissue right where it’s needed.

Gallego-Perez and his Ohio State colleagues wondered if it was possible to engineer a device, placed on the skin, capable of delivering reprogramming factors directly into cells, eliminating the need for the viral delivery vectors now used in such work. While such a goal might sound futuristic, Gallego-Perez and colleagues offered proof of principle last year in Nature Nanotechnology that such a chip can reprogram skin cells in mice [1].

Here’s how it works: First, the chip’s channels are loaded with specific reprogramming factors, including DNA or proteins, and then the chip is placed on the skin. A small electrical current zaps the chip’s channels, driving reprogramming factors through cell membranes and into cells. The process, called tissue nanotransfection (TNT), is finished in milliseconds.

To see if the chips could help heal injuries, the researchers used them to reprogram skin cells into vascular cells in mice. Not only did the technology regenerate blood vessels and restore blood flow to injured legs, but the animals also regained use of those limbs within two weeks of treatment.

The researchers then went on to show that they could use the chips to reprogram mouse skin cells into neural tissue. When proteins secreted by those reprogrammed skin cells were injected into mice with brain injuries, the injections helped the animals recover.

In the newly funded work, Gallego-Perez wants to take the approach one step further. His team will use the chip to reprogram harder-to-reach tissues within the body, including peripheral nerves and the brain. The hope is that the device will reprogram cells surrounding an injury, even including scar tissue, and “repurpose” them to encourage nerve repair and regeneration. Such an approach may help people who’ve suffered a stroke or traumatic nerve injury.

If all goes well, this TNT method could one day fill an important niche in emergency medicine. Gallego-Perez’s work is also a fine example of just one of the many amazing ideas now being pursued in the emerging field of regenerative medicine.

Reference:

[1] Topical tissue nano-transfection mediates non-viral stroma reprogramming and rescue. Gallego-Perez D, Pal D, Ghatak S, Malkoc V, Higuita-Castro N, Gnyawali S, Chang L, Liao WC, Shi J, Sinha M, Singh K, Steen E, Sunyecz A, Stewart R, Moore J, Ziebro T, Northcutt RG, Homsy M, Bertani P, Lu W, Roy S, Khanna S, Rink C, Sundaresan VB, Otero JJ, Lee LJ, Sen CK. Nat Nanotechnol. 2017 Oct;12(10):974-979.

Links:

Stroke Information (National Institute of Neurological Disorders and Stroke/NIH)

Burns and Traumatic Injury (NIH)

Peripheral Neuropathy (National Institute of Neurological Disorders and Stroke/NIH)

Video: Breakthrough Device Heals Organs with a Single Touch (YouTube)

Gallego-Perez Lab (The Ohio State University College of Medicine, Columbus)

Gallego-Perez Project Information (NIH RePORTER)

NIH Support: Common Fund; National Institute of Neurological Disorders and Stroke


Discovering a Source of Laughter in the Brain

Illustration showing how an electrode was inserted into the cingulum bundle. Courtesy of American Society for Clinical Investigation

If laughter really is the best medicine, wouldn’t it be great if we could learn more about what goes on in the brain when we laugh? Neuroscientists recently made some major progress on this front by pinpointing a part of the brain that, when stimulated, never fails to induce smiles and laughter.

In their study conducted in three patients undergoing electrical stimulation brain mapping as part of epilepsy treatment, the NIH-funded team found that stimulation of a specific tract of neural fibers, called the cingulum bundle, triggered laughter, smiles, and a sense of calm. Not only do the findings shed new light on the biology of laughter, researchers hope they may also lead to new strategies for treating a range of conditions, including anxiety, depression, and chronic pain.

In people with epilepsy whose seizures are poorly controlled with medication, surgery to remove seizure-inducing brain tissue sometimes helps. People awaiting such surgeries must first undergo a procedure known as intracranial electroencephalography (iEEG), which involves temporarily placing 10 to 20 arrays of tiny electrodes in the brain for up to several weeks in order to pinpoint the source of a patient’s seizures. With the patient’s permission, those electrodes also enable physician-researchers to stimulate various regions of the patient’s brain to map their functions and potentially make new and unexpected discoveries.

In the new study, published in The Journal of Clinical Investigation, Jon T. Willie, Kelly Bijanki, and their colleagues at Emory University School of Medicine, Atlanta, looked at a 23-year-old undergoing iEEG for 8 weeks in preparation for surgery to treat her uncontrolled epilepsy [1]. One of the electrodes implanted in her brain was located within the cingulum bundle and, when that area was stimulated for research purposes, the woman experienced an uncontrollable urge to laugh. Not only was the woman given to smiles and giggles, she also reported feeling relaxed and calm.

As a further and more objective test of her mood, the researchers asked the woman to interpret the expression of faces on a computer screen as happy, sad, or neutral. Electrical stimulation to the cingulum bundle led her to see those faces as happier, a sign of a generally more positive mood. A full evaluation of her mental state also showed she was fully aware and alert.

To confirm the findings, the researchers looked to two other patients, a 40-year-old man and a 28-year-old woman, both undergoing iEEG in the course of epilepsy treatment. In those two volunteers, stimulation of the cingulum bundle also triggered laughter and reduced anxiety with otherwise normal cognition.

Willie notes that the cingulum bundle links many brain areas together. He likens it to a superhighway with lots of on- and off-ramps. He suspects the spot they’ve uncovered lies at a key intersection, providing access to various brain networks regulating mood, emotion, and social interaction.

Previous research has shown that stimulation of other parts of the brain can also prompt patients to laugh. However, what makes stimulation of the cingulum bundle a particularly promising approach is that it not only triggers laughter, but also reduces anxiety.

The new findings suggest that stimulation of the cingulum bundle may be useful for calming patients’ anxieties during neurosurgeries in which they must remain awake. In fact, Willie’s team did so during the 23-year-old woman’s subsequent epilepsy surgery. Each time she became distressed, the stimulation provided immediate relief. And if traditional deep brain stimulation or less invasive means of brain stimulation can be developed and found safe for long-term use, they may offer new ways to treat depression, anxiety disorders, and chronic pain.

Meanwhile, Willie’s team is hard at work using similar approaches to map brain areas involved in other aspects of mood, including fear, sadness, and anxiety. Together with the multidisciplinary work being mounted by the NIH-led BRAIN Initiative, these kinds of studies promise to reveal functionalities of the human brain that have previously been out of reach, with profound consequences for neuroscience and human medicine.

Reference:

[1] Cingulum stimulation enhances positive affect and anxiolysis to facilitate awake craniotomy. Bijanki KR, Manns JR, Inman CS, Choi KS, Harati S, Pedersen NP, Drane DL, Waters AC, Fasano RE, Mayberg HS, Willie JT. J Clin Invest. 2018 Dec 27.

Links:

Video: Patient’s Response (Bijanki et al. The Journal of Clinical Investigation)

Epilepsy Information Page (National Institute of Neurological Disorders and Stroke/NIH)

Jon T. Willie (Emory University, Atlanta, GA)

NIH Support: National Institute of Neurological Disorders and Stroke; National Center for Advancing Translational Sciences

