

Finding Better Ways to Image the Retina


Two light microscopy fields of the retina showing small blue dots (rods) surrounding larger yellow dots (cones)
Credit: Johnny Tam, National Eye Institute, NIH

Every day, all around the world, eye care professionals are busy performing dilated eye exams. By looking through a patient’s widened pupil, they can view the retina—the postage stamp-sized tissue lining the back of the inner eye—and look for irregularities that may signal the development of vision loss.

The great news is that, thanks to research, retinal imaging just keeps getting better and better. The images above, which show the same cells viewed with two different microscopic techniques, provide good examples of how tweaking existing approaches can significantly improve our ability to visualize the retina’s two types of light-sensitive neurons: rod and cone cells.

Specifically, these images show an area of the outer retina, which is the part of the tissue that’s observed during a dilated eye exam. Thanks to colorization and other techniques, a viewer can readily distinguish between the light-sensing, color-detecting cone cells (orange) and the much smaller, lowlight-sensing rod cells (blue).

These high-res images come from Johnny Tam, a researcher with NIH’s National Eye Institute. Working with Alfredo Dubra, Stanford University, Palo Alto, CA, Tam and his team figured out how to limit light distortion of the rod cells. The key was illuminating the eye using less light, provided as a halo instead of the usual solid, circular beam.

But the researchers’ solution hit a temporary snag when the halo reflected by the rods and cones created another undesirable ring of light. To block it out, Tam’s team introduced a tiny pinhole smaller than the system’s Airy disk (so-called sub-Airy detection). Along with adaptive optics technology [1] to correct for other distortions of light, the scientists were excited to see such a clear view of individual rods and cones. They published their findings recently in the journal Optica [2].
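The Airy disk here is the diffraction-limited spot that even a perfect optical system spreads a point of light into; a “sub-Airy” pinhole is simply smaller than that spot. As a rough sketch, the radius of the spot’s first dark ring can be estimated from the imaging wavelength and numerical aperture. The wavelength and aperture below are illustrative assumptions, not the parameters reported in the Optica paper:

```python
# Back-of-the-envelope estimate of the Airy disk radius, the diffraction-limited
# spot size that a "sub-Airy" detection pinhole must undercut.
# The 790 nm wavelength and 0.24 numerical aperture are assumed, illustrative
# values (near-infrared imaging light, a typical numerical aperture for the eye).

def airy_radius_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Radius of the first dark ring of the Airy pattern: r = 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / numerical_aperture

r = airy_radius_nm(790.0, 0.24)
print(f"Airy disk radius ~ {r:.0f} nm")  # roughly 2 microns under these assumptions
```

A detection pinhole smaller than this radius rejects part of the diffracted halo, which is the trick the sentence above describes.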

The resolution produced using these techniques is so much improved (33 percent better than with current methods) that it’s even possible to visualize the tiny inner segments of both rods and cones. In the cones, for example, these inner segments help direct light coming into the eye to other, photosensitive parts that absorb single photons of light. The light is then converted into electrical signals that stream to the brain’s visual centers in the occipital cortex, which makes it possible for us to experience vision.

Tam and team are currently working with physician-scientists in the NIH Clinical Center to image the retinas of people with a variety of retinal diseases, including age-related macular degeneration (AMD), a leading cause of vision loss in older adults. These research studies are ongoing, but offer hopeful possibilities for safe and non-intrusive monitoring of individual rods and cones over time, as well as across disease types. That’s obviously good news for patients. Plus it will help scientists understand how a rod or cone cell stops working, as well as more precisely test the effects of gene therapy and other experimental treatments aimed at restoring vision.


[1] Noninvasive imaging of the human rod photoreceptor mosaic using a confocal adaptive optics scanning ophthalmoscope. Dubra A, Sulai Y, Norris JL, Cooper RF, Dubis AM, Williams DR, Carroll J. Biomed Opt Express. 2011 Jul 1;2(7):1864-76.

[2] In-vivo sub-diffraction adaptive optics imaging of photoreceptors in the human eye with annular pupil illumination and sub-Airy detection. Lu R, Aguilera N, Liu T, Liu J, Giannini JP, Li J, Bower AJ, Dubra A, Tam J. Optica. 2021;8(3):333-343.


Get a Dilated Eye Exam (National Eye Institute/NIH)

How the Eyes Work (NEI)

Eye Health Data and Statistics (NEI)

Tam Lab (NEI)

Dubra Lab (Stanford University, Palo Alto, CA)

NIH Support: National Eye Institute

Defining Neurons in Technicolor


Brain Architecture
Credit: Allen Institute for Brain Science, Seattle

Can you identify a familiar pattern in this image’s square grid? Yes, it’s the outline of the periodic table! But instead of organizing chemical elements, this periodic table sorts 46 different types of neurons present in the visual cortex of a mouse brain.

Scientists, led by Hongkui Zeng at the Allen Institute for Brain Science, Seattle, constructed this periodic table by assigning colors to their neuronal discoveries based upon their main cell functions [1]. Cells in pinks, violets, reds, and oranges have inhibitory electrical activity, while those in greens and blues have excitatory electrical activity.

For any given cell, the darker colors indicate dendrites, which receive signals from other neurons. The lighter colors indicate axons, which transmit signals. Examples of electrical properties—the number and intensity of their “spikes”—appear along the edges of the table near the bottom.

To create this visually arresting image, Zeng’s NIH-supported team introduced genetically encoded probes into neurons. The probes carry genes that make certain types of neurons glow bright colors under the microscope.

This allowed the researchers to examine a tiny slice of brain tissue and view each colored neuron’s shape, as well as measure its electrical response. They followed up with computational tools to combine these two characteristics and classify cell types based on their shape and electrical activity. Zeng’s team could then sort the cells into clusters using a computer algorithm to avoid potential human bias from visually interpreting the data.
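As a generic sketch of that kind of pipeline (not the Allen Institute’s actual algorithm), one could give each cell a small feature vector combining a morphological measure and an electrophysiological measure, then group the cells with a simple k-means clustering. The feature names and numbers here are invented toy data:

```python
# Minimal k-means sketch: cluster cells by a morphological feature plus an
# electrophysiological feature. Toy data and features; the real study used
# far richer descriptions of each cell.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # start from k randomly chosen cells
    for _ in range(iters):
        # Assign each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(vals) / len(cl) for vals in zip(*cl))
    return centers, clusters

# Each cell: (dendrite length in mm, peak spike rate in Hz) -- invented values.
cells = [(1.1, 80), (1.0, 85), (0.9, 78),   # one putative type
         (3.2, 20), (3.0, 25), (3.1, 22)]   # another putative type
centers, clusters = kmeans(cells, k=2)
print(sorted(len(c) for c in clusters))  # the two putative types separate cleanly
```

The appeal of this kind of approach is exactly what the paragraph notes: the grouping falls out of the measurements themselves rather than a human’s visual judgment.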

Why compile such a detailed atlas of neuronal subtypes? Although scientists have been surveying cells since the invention of the microscope centuries ago, there is still no consensus on what a “cell type” is. Large, rich datasets like this atlas contain massive amounts of information to characterize individual cells well beyond their appearance under a microscope, helping to explain factors that make cells similar or dissimilar. Those differences may not be apparent to the naked eye.

Just last year, Allen Institute researchers conducted similar work by categorizing nearly 24,000 cells from the brain’s visual and motor cortex into different types based upon their gene activity [2]. The latest research lines up well with the cell subclasses and types categorized in the previous gene-activity work. As a result, the scientists have more evidence that each of the 46 cell types is actually distinct from the others and likely drives a particular function within the visual cortex.

Publicly available resources, like this database of cell types, fuel much more discovery. Scientists all over the world can look at this table (and soon, more atlases from other parts of the brain) to see where a cell type fits into a region of interest and how it might behave in a range of brain conditions.


[1] Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Gouwens NW, et al. Nat Neurosci. 2019 Jul;22(7):1182-1195.

[2] Shared and distinct transcriptomic cell types across neocortical areas. Tasic B, et al. Nature. 2018 Nov;563(7729):72-78.


Brain Basics: The Life and Death of a Neuron (National Institute of Neurological Disorders and Stroke/NIH)

Cell Types: Overview of the Data (Allen Brain Atlas/Allen Institute for Brain Science, Seattle)

Hongkui Zeng (Allen Institute)

NIH Support: National Institute of Mental Health; Eunice Kennedy Shriver National Institute of Child Health & Human Development

Taking Brain Imaging Even Deeper


Thanks to yet another amazing advance made possible by the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, I can now take you on a 3D fly-through of all six layers of the part of the mammalian brain that processes external signals into vision. This unprecedented view is made possible by three-photon microscopy, a low-energy imaging approach that is allowing researchers to peer deeply within the brains of living creatures without damaging or killing their brain cells.

The basic idea of multi-photon microscopy is this: for fluorescence microscopy to work, you want to deliver a specific energy level of photons (usually with a laser) to excite a fluorescent molecule, so that it will emit light at a slightly lower energy (longer wavelength) and be visualized as a burst of colored light in the microscope. That’s how fluorescence works. Green fluorescent protein (GFP) is one of many proteins that can be engineered into cells or mice to make that possible.

But for that version of the approach to work deep in tissue, the excitation photons need to penetrate far below the surface, and that’s not possible for such high-energy photons, which scatter readily. So two-photon strategies were developed, in which two lower-energy photons must hit the target simultaneously, their combined energy activating the fluorophore.

That approach has made a big difference, but for deep tissue penetration the photons are still too high in energy. Enter the three-photon version! Now the even lower energy of the photons makes tissue more optically transparent, though for activation of the fluorescent protein, three photons have to hit it simultaneously. But that’s part of the beauty of the system—the visual “noise” also goes down.
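The arithmetic behind this is simple: a photon’s energy is E = hc/λ, so n photons at n times the wavelength together carry the same energy as one short-wavelength photon. A quick check, using an assumed (illustrative) 480 nm one-photon excitation wavelength for a GFP-like fluorophore:

```python
# Multi-photon energy bookkeeping: three photons at triple the wavelength
# deliver the same total energy as one short-wavelength photon.
# The 480 nm one-photon excitation wavelength is an assumed, illustrative value.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm: float) -> float:
    """Photon energy E = h * c / wavelength."""
    return H * C / (wavelength_nm * 1e-9)

one_photon = photon_energy_joules(480)         # one blue photon
three_photon = 3 * photon_energy_joules(1440)  # three near-infrared photons
print(one_photon, three_photon)  # same total excitation energy
```

The longer-wavelength photons scatter less in tissue, and because fluorescence requires three of them to arrive at once, it occurs only at the tightly focused spot, which is why background noise drops.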

This particular video shows what takes place in the visual cortex of mice when objects pass before their eyes. As the objects appear, specific neurons (green) are activated to process the incoming information. Nearby, and slightly obscuring the view, are the blood vessels (pink, violet) that nourish the brain. At 33 seconds into the video, you can see the neurons’ myelin sheaths (pink) branching into the white matter of the brain’s subplate, which plays a key role in organizing the visual cortex during development.

This video comes from a recent paper in Nature Communications by a team from the Massachusetts Institute of Technology, Cambridge [1]. To obtain this pioneering view of the brain, Mriganka Sur, Murat Yildirim, and their colleagues built an innovative microscope that delivers low-energy photons, three of which must strike a fluorophore simultaneously to excite it. After carefully optimizing the system, they were able to peer more than 1,000 microns (0.04 inches) deep into the visual cortex of a live, alert mouse, far surpassing the imaging depth of standard one-photon microscopy (about 100 microns) and two-photon microscopy (400-500 microns).

This improved imaging depth allowed the team to plumb all six layers of the visual cortex (two-photon microscopy tops out at about three layers), as well as to record in real time the brain’s visual processing activities. Helping the researchers to achieve this feat was the availability of a genetically engineered mouse model in which the cells of the visual cortex are color labelled to distinguish blood vessels from neurons, and to show when neurons are active.

During their in-depth imaging experiments, the MIT researchers found that each of the visual cortex’s six layers exhibited different responses to incoming visual information. One of the team’s most fascinating discoveries is that neurons residing on the subplate are actually quite active in adult animals. It had been assumed that these subplate neurons were active only during development. Their role in mature animals is now an open question for further study.

Sur often likens the work in his neuroscience lab to astronomers’ perpetual quest to see farther into the cosmos; his goal, though, is to see ever deeper into the brain. He and his group, along with many other researchers supported by the BRAIN Initiative, are indeed proving themselves to be biological explorers of the first order.


[1] Functional imaging of visual cortical layers and subplate in awake mice with optimized three-photon microscopy. Yildirim M, Sugihara H, So PTC, Sur M. Nat Commun. 2019 Jan 11;10(1):177.


Sur Lab (Massachusetts Institute of Technology, Cambridge)

The Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative (NIH)

NIH Support: National Eye Institute; National Institute of Neurological Disorders and Stroke; National Institute of Biomedical Imaging and Bioengineering

Vision Loss Boosts Auditory Perception


Image of green specks with blobs of blue centered around a large red blob with tentacles

Caption: A neuron (red) in the auditory cortex of a mouse brain receives input from axons projecting from the thalamus (green). Also shown are the nuclei (blue) of other cells.
Credit: Emily Petrus, Johns Hopkins University, Baltimore

Many people with vision loss—including such gifted musicians as the late Doc Watson (my favorite guitar picker), Stevie Wonder, Andrea Bocelli, and the Blind Boys of Alabama—are thought to have supersensitive hearing. They are often much better at discriminating pitch, locating the origin of sounds, and hearing softer tones than people who can see. Now, a new animal study suggests that even a relatively brief period of simulated blindness may have the power to enhance hearing among those with normal vision.

In the study, NIH-funded researchers at the University of Maryland, College Park, and Johns Hopkins University, Baltimore, found that when they kept adult mice in complete darkness for one week, the animals’ ability to hear improved significantly [1]. What’s more, when they examined the animals’ brains, the researchers detected changes in the connections among neurons in the auditory cortex, the part of the brain where sound is processed.