Nelson Mandela said, “Education is the most powerful weapon which you can use to change the world.” At NIH’s National Institute of General Medical Sciences (NIGMS), we believe that educating future and current scientists from diverse backgrounds benefits the entire biomedical research enterprise, changing the world through advances in disease diagnosis, treatment, and prevention.
As the summer winds down and students and educators embark on a new school year, I thought I’d highlight some of our educational resources that complement science, technology, engineering, and math (STEM) curricula. I’d also like to draw your attention to training programs designed to inspire and support research careers.
STEM Programs and Resources from NIH
The NIGMS Science Education Partnership Awards (SEPAs) provide pre-K-12 students from underserved communities with access to STEM educational resources and encourage them to aspire to careers in health research.
The SEPA grants in almost every state support innovative, research-based science education programs, furthering NIGMS’ mission to ensure a strong and diverse research ecosystem. Resources generated through SEPAs are free, mapped to state and national teaching standards for STEM, and rigorously evaluated for effectiveness. These resources include mobile laboratories, health exhibits in museums and science centers, educational resources for students, and professional development for teachers.
One SEPA program at Purdue University College of Veterinary Medicine, West Lafayette, IN, pairs veterinarians from their nationwide “superhero” League of VetaHumanz with local schools or community centers that support underserved students. These professional veterinarians, also from diverse backgrounds, strive to help young students from underrepresented groups envision future careers caring for animals.
Another SEPA program at Baylor University, Waco, TX, is increasing access to chemistry labs for high schoolers with blindness. It uses a robotic reactor with enhanced safety features to eliminate many dangers of synthetic organic chemistry. Students with blindness can control the robot to conduct experiments in a similar fashion to their sighted counterparts. The robot is housed within an airtight, blast-proof glove box, and it can perform common chemistry operations such as weighing and dispensing solid or liquid reagents; delivering solvents; combining reagents with the solvents; and stirring, heating, or cooling the reaction mixtures.
As noted in the 2021 report from the White House’s Office of Science and Technology Policy, “equity and inclusion are fundamental prerequisites for making high-quality STEM education accessible to all Americans and will maximize the creative capacity of tomorrow’s workforce.” I believe this statement falls right in line with the spirit of SEPAs.
New NIH-Wide STEM Teaching Resources Website
To help educators find free science education content, we recently launched a STEM teaching resources website. It includes NIH-wide teaching materials as well as those from SEPA programs for grades K-12, categorized by different health and research topic areas.
The NIGMS free educational resource Pathways, designed for educators and aspiring scientists in grades 6-12, is one of many resources available through the STEM website. Each issue of Pathways provides information about basic biomedical science and research careers and includes a student magazine, teacher lesson plans, and interactives such as Kahoot! classroom quizzes. Our most recent vaccine science issue teaches students how COVID-19 vaccines work in the body and introduces them to scientists dedicated to vaccine research.
Programs for Early Career Scientists
While SEPA grants focus on future scientists (and their educators) in grades pre-K-12, NIGMS also has a robust research training portfolio for those at the undergraduate through postdoctoral and professional levels. These programs aim to enhance diversity by engaging and training scientists from diverse backgrounds early in their careers.
At the undergraduate level, programs like Maximizing Access to Research Careers (MARC) provide students from diverse backgrounds with mentorship and career development. We recently highlighted the MARC program at Vanderbilt University, Nashville, TN, on our Biomedical Beat blog, showing the program’s impact on students.
At the other end of the spectrum, our Maximizing Opportunities for Scientific and Academic Independent Careers (MOSAIC) program helps promising postdoctoral researchers from diverse backgrounds transition into independent faculty careers. The MOSAIC scholars become part of a career development program to expand their professional networks and gain additional skills and mentoring through scientific societies. You can learn more about each of these impressive early career scientists on our MOSAIC Scholars webpages.
At NIGMS, we’re dedicated to increasing the diversity of the biomedical research workforce. Through STEM content and outreach, as well as scientist training resources, we emphasize diversity, equity, inclusion, and accessibility. This holds true with funding and programming for current scientists, and in the inspiration and training of future scientists.
SEPA Award (National Institute of General Medical Sciences/NIH)
The League of VetaHumanz: Encouraging Kids to Use Their Powers for Good! (Biomedical Beat Blog/NIGMS)
Catching Up With ReMARCable Vanderbilt Graduates (Biomedical Beat Blog/NIGMS)
Note: Dr. Lawrence Tabak, who performs the duties of the NIH Director, has asked the heads of NIH’s Institutes and Centers (ICs) to contribute occasional guest posts to the blog to highlight some of the interesting science that they support and conduct. This is the 15th in the series of NIH IC guest posts that will run until a new permanent NIH director is in place.
Posted on by Michael F. Chiang, M.D., National Eye Institute
One of many health risks premature infants face is retinopathy of prematurity (ROP), a leading cause of childhood blindness worldwide. ROP causes abnormal blood vessel growth in the light-sensing eye tissue called the retina. Left untreated, ROP can lead to scarring, retinal detachment, and blindness. It’s the disease that caused singer and songwriter Stevie Wonder to lose his vision.
Now, effective treatments are available—if the disease is diagnosed early and accurately. Advancements in neonatal care have led to the survival of extremely premature infants, who are at highest risk for severe ROP. Despite major advancements in diagnosis and treatment, tragically, about 600 infants in the U.S. still go blind each year from ROP. This disease is difficult to diagnose and manage, even for the most experienced ophthalmologists. And the challenges are much worse in remote corners of the world that have limited access to ophthalmic and neonatal care.
Artificial intelligence (AI) is helping bridge these gaps. Prior to my tenure as National Eye Institute (NEI) director, I helped develop a system called i-ROP Deep Learning (i-ROP DL), which automates the identification of ROP. In essence, we trained a computer to identify subtle abnormalities in retinal blood vessels from thousands of images of premature infant retinas. Strikingly, the i-ROP DL system outperformed even international ROP experts. This has enormous potential to improve the quality and delivery of eye care to premature infants worldwide.
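The actual i-ROP DL system is a deep convolutional network trained on retinal photographs, but the core idea of supervised training that the paragraph above describes can be sketched in a few lines. This toy example, with invented feature names (vessel "dilation" and "tortuosity" scores) and synthetic labels, shows a simple classifier learning to separate "abnormal" from "normal" examples by gradient descent:

```python
import numpy as np

# Toy stand-in for the supervised training described above. The features,
# labels, and model are invented for illustration; the real i-ROP DL
# system is a deep convolutional network trained on retinal images.
rng = np.random.default_rng(0)

n = 400
# Two synthetic per-image features: vessel dilation and tortuosity scores.
dilation = rng.normal(0.0, 1.0, n)
tortuosity = rng.normal(0.0, 1.0, n)
X = np.column_stack([dilation, tortuosity])
# Synthetic label: "abnormal" when the combined score is high, plus noise.
y = ((dilation + tortuosity + rng.normal(0.0, 0.5, n)) > 0).astype(float)

# Logistic regression trained by gradient descent on the log loss.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y) / n)           # gradient step for weights
    b -= lr * np.mean(p - y)                # gradient step for bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real diagnostic system replaces the two hand-picked features with thousands of learned convolutional features, but the training loop follows the same pattern: predict, measure error against expert labels, and adjust.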
Of course, the promise of medical artificial intelligence extends far beyond ROP. In 2018, the FDA approved the first autonomous AI-based diagnostic tool in any field of medicine. Called IDx-DR, the system streamlines screening for diabetic retinopathy (DR), and its results require no interpretation by a doctor. DR occurs when blood vessels in the retina grow irregularly, bleed, and potentially cause blindness. About 34 million people in the U.S. have diabetes, and each is at risk for DR.
As with ROP, early diagnosis and intervention are crucial to preventing vision loss from DR. The American Diabetes Association recommends people with diabetes see an eye care provider annually to have their retinas examined for signs of DR. Yet fewer than 50 percent of Americans with diabetes receive these annual eye exams.
The IDx-DR system was conceived by Michael Abramoff, an ophthalmologist and AI expert at the University of Iowa, Iowa City. With NEI funding, Abramoff used deep learning to design a system for use in a primary-care medical setting. A technician with minimal ophthalmology training can use the IDx-DR system to scan a patient’s retinas and get results indicating whether a patient should be sent to an eye specialist for follow-up evaluation or to return for another scan in 12 months.
Many other methodological innovations in AI have occurred in ophthalmology. That’s because imaging is so crucial to disease diagnosis and clinical outcome data are so readily available. As a result, AI-based diagnostic systems are in development for many other eye diseases, including cataract, age-related macular degeneration (AMD), and glaucoma.
Rapid advances in AI are occurring in other medical fields, such as radiology, cardiology, and dermatology. But disease diagnosis is just one of many applications for AI. Neurobiologists are using AI to answer questions about retinal and brain circuitry, disease modeling, microsurgical devices, and drug discovery.
If it sounds too good to be true, it may be. There’s a lot of work that remains to be done. Significant challenges to AI utilization in science and medicine persist. For example, researchers from the University of Washington, Seattle, last year tested seven AI-based screening algorithms that were designed to detect DR. They found that, under real-world conditions, only one outperformed human screeners. A key problem is these AI algorithms need to be trained with more diverse images and data, including a wider range of races, ethnicities, and populations—as well as different types of cameras.
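Validation studies like the one above typically compare algorithms and human graders on sensitivity (how many diseased eyes are flagged) and specificity (how many healthy eyes are correctly passed). A minimal sketch of that arithmetic, using hypothetical counts rather than the study’s data:

```python
# Confusion-matrix arithmetic of the kind used to compare screening
# algorithms with human graders. The counts below are hypothetical,
# not numbers from the University of Washington study.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of diseased eyes correctly flagged.
    Specificity: fraction of healthy eyes correctly passed."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# Hypothetical screener: flags 90 of 100 DR cases, passes 850 of 900 healthy eyes.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=850, fp=50)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

The trade-off between these two numbers is exactly where real-world performance can diverge from the lab: an algorithm tuned for high sensitivity on one camera and population may lose specificity on another.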
How do we address these gaps in knowledge? We’ll need larger datasets, a collaborative culture of sharing data and software libraries, broader validation studies, and algorithms to address health inequities and to avoid bias. The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) project and NIH’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Program project will be major steps toward addressing those gaps.
So, yes—AI is getting smarter. But harnessing its full power will rely on scientists and clinicians getting smarter, too.
Automated diagnosis of plus disease in retinopathy of prematurity using deep convolutional neural networks. Brown JM, Campbell JP, Beers A, Chang K, Ostmo S, Chan RVP, Dy J, Erdogmus D, Ioannidis S, Kalpathy-Cramer J, Chiang MF; Imaging and Informatics in Retinopathy of Prematurity (i-ROP) Research Consortium. JAMA Ophthalmol. 2018 Jul 1;136(7):803-810.
FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. Food and Drug Administration. April 11, 2018.
Multicenter, head-to-head, real-world validation study of seven automated artificial intelligence diabetic retinopathy screening systems. Lee AY, Yanagihara RT, Lee CS, Blazes M, Jung HC, Chee YE, Gencarella MD, Gee H, Maa AY, Cockerham GC, Lynch M, Boyko EJ. Diabetes Care. 2021 May;44(5):1168-1175.
Retinopathy of Prematurity (National Eye Institute/NIH)
Diabetic Eye Disease (NEI)
Michael Abramoff (University of Iowa, Iowa City)
Bridge to Artificial Intelligence (Common Fund/NIH)
[Note: Acting NIH Director Lawrence Tabak has asked the heads of NIH’s institutes and centers to contribute occasional guest posts to the blog as a way to highlight some of the cool science that they support and conduct. This is the second in the series of NIH institute and center guest posts that will run until a new permanent NIH director is in place.]
Posted on by Dr. Francis Collins
Each morning, more than 2 million Americans start their rise-and-shine routine by remembering to take their eye drops. The drops treat their open-angle glaucoma, the most common form of the disease, caused by obstructed drainage of fluid where the eye’s cornea and iris meet. The slow drainage increases fluid pressure at the front of the eye. Meanwhile, at the back of the eye, fluid pushes on the optic nerve, causing its bundled fibers to fray and leading to gradual loss of side vision.
For many, the eye drops help to lower intraocular pressure and prevent vision loss. But for others, the drops aren’t sufficient and their intraocular pressure remains high. Such people will need next-level care, possibly including eye surgery, to reopen the clogged drainage ducts and slow this disease that disproportionately affects older adults and African Americans over age 40.
Sally Baxter, a physician-scientist with expertise in ophthalmology at the University of California, San Diego (UCSD), wants to learn how to predict who is at greatest risk for serious vision loss from open-angle and other forms of glaucoma. That way, they can receive more aggressive early care to protect their vision from this second-leading cause of blindness in the U.S.
To pursue this challenging research goal, Baxter has received a 2020 NIH Director’s Early Independence Award. Her research will build on the clinical observation that people with glaucoma frequently battle other chronic health problems, such as high blood pressure, diabetes, and heart disease. To learn more about how these and other chronic health conditions might influence glaucoma outcomes, Baxter has begun mining a rich source of data: electronic health records (EHRs).
In an earlier study of patients at UCSD, Baxter showed that EHR data helped to predict which people would need glaucoma surgery within the next six months. The finding suggested that the EHR, especially information on a patient’s blood pressure and medications, could predict the risk for worsening glaucoma.
In her NIH-supported work, she’s already extended this earlier “Big Data” finding by analyzing data from more than 1,200 people with glaucoma who participate in NIH’s All of Us Research Program. With consent from the participants, Baxter used their EHRs to train a computer to find telltale patterns within the data and then predict with 80 to 99 percent accuracy who would later require eye surgery.
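The workflow described above, training a model on EHR-derived features to predict a future surgical outcome, can be sketched with standard machine learning tools. This is an illustrative example only: the feature names, the risk rule generating the synthetic labels, and the model choice are all invented for the sketch, not drawn from the All of Us analysis or Baxter’s actual methods:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for EHR features; names and distributions are
# invented for illustration, not All of Us data.
rng = np.random.default_rng(42)
n = 1200

systolic_bp = rng.normal(130, 15, n)   # mmHg
num_medications = rng.poisson(4, n)    # active prescriptions
age = rng.normal(68, 10, n)            # years

# Invented rule: higher blood pressure, more medications, and older age
# raise the (synthetic) odds of later needing glaucoma surgery.
risk = 0.04 * (systolic_bp - 130) + 0.3 * num_medications + 0.05 * (age - 68)
needs_surgery = (risk + rng.normal(0.0, 1.0, n) > 1.2).astype(int)

X = np.column_stack([systolic_bp, num_medications, age])
X_train, X_test, y_train, y_test = train_test_split(
    X, needs_surgery, test_size=0.25, random_state=0)

# Fit on one subset of patients, evaluate on held-out patients.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The key design point mirrored here is the held-out evaluation: a model that predicts well only on the patients it was trained on would say nothing about future patients.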
The findings confirm that machine learning approaches and EHR data can indeed help in managing people with glaucoma. That’s true even when the EHR data don’t contain any information specific to a person’s eye health.
In fact, the work of Baxter and other groups has pointed to an especially important role for blood pressure in shaping glaucoma outcomes. Hoping to explore this lead further with the support of her Early Independence Award, Baxter also will enroll patients in a study to test whether blood-pressure monitoring smart watches can add important predictive information on glaucoma progression. By combining round-the-clock blood pressure data with EHR data, she hopes to predict glaucoma progression with even greater precision. She’s also exploring innovative ways to track whether people with glaucoma use their eye drops as prescribed, which is another important predictor of the risk of irreversible vision loss.
Glaucoma research continues to make great progress, ranging from basic research to the development of new treatments and high-resolution imaging technologies to improve diagnostics. But Baxter’s quest to develop practical clinical tools holds great promise, too, and hopefully will help one day to protect the vision of millions of people with glaucoma around the world.
Machine learning-based predictive modeling of surgical intervention in glaucoma using systemic data from electronic health records. Baxter SL, Marks C, Kuo TT, Ohno-Machado L, Weinreb RN. Am J Ophthalmol. 2019 Dec;208:30-40.
Predictive analytics for glaucoma using data from the All of Us Research Program. Baxter SL, Saseendrakumar BR, Paul P, Kim J, Bonomi L, Kuo TT, Loperena R, Ratsimbazafy F, Boerwinkle E, Cicek M, Clark CR, Cohn E, Gebo K, Mayo K, Mockrin S, Schully SD, Ramirez A, Ohno-Machado L; All of Us Research Program Investigators. Am J Ophthalmol. 2021 Jul;227:74-86.
Smart electronic eyedrop bottle for unobtrusive monitoring of glaucoma medication adherence. Aguilar-Rivera M, Erudaitius DT, Wu VM, Tantiongloc JC, Kang DY, Coleman TP, Baxter SL, Weinreb RN. Sensors (Basel). 2020 Apr 30;20(9):2570.
Glaucoma (National Eye Institute/NIH)
Video: Sally Baxter (All of Us Research Program)
Sally Baxter (University of California San Diego)
Baxter Project Information (NIH RePORTER)
NIH Director’s Early Independence Award (Common Fund)
NIH Support: Common Fund
Posted on by Dr. Francis Collins
Recently, I’ve highlighted just a few of the many amazing advances coming out of the NIH-led Brain Research through Advancing Innovative Neurotechnologies® (BRAIN) Initiative. And for our grand finale, I’d like to share a cool video that reveals how this revolutionary effort to map the human brain is opening up once-unimaginable possibilities for helping people with disabilities, such as vision loss.
This video, produced by Jordi Chanovas and narrated by Stephen Macknik, State University of New York Downstate Health Sciences University, Brooklyn, outlines a new strategy aimed at restoring loss of central vision in people with age-related macular degeneration (AMD), a leading cause of vision loss among people age 50 and older. The researchers’ ultimate goal is to give such people the ability to see the faces of their loved ones or possibly even read again.
In the innovative approach you see here, neuroscientists aren’t even trying to repair the part of the eye destroyed by AMD: the light-sensitive retina. Instead, they are attempting to recreate the light-recording function of the retina within the brain itself.
How is that possible? Normally, the retina streams visual information continuously to the brain’s primary visual cortex, which receives the information and processes it into the vision that allows you to read these words. In folks with AMD-related vision loss, even though many cells in the center of the retina have stopped streaming, the primary visual cortex remains fully functional to receive and process visual information.
About five years ago, Macknik and his collaborator Susana Martinez-Conde, also at Downstate, wondered whether it might be possible to circumvent the eyes and stream an alternative source of visual information to the brain’s primary visual cortex, thereby restoring vision in people with AMD. They sketched out some possibilities and settled on an innovative system that they call OBServ.
Among the vital components of this experimental system are tiny, implantable neuro-prosthetic recording devices. Created in the Macknik and Martinez-Conde labs, each 1-centimeter device is powered by induction coils similar to those in the cochlear implants used to help people with profound hearing loss. The researchers propose to surgically implant two of these devices in the rear of the brain, where they will orchestrate the visual process.
For technical reasons, the restoration of central vision will likely be partial, with the window of vision spanning only about the size of one-third of an adult thumbnail held at arm’s length. But researchers think that would be enough central vision for people with AMD to regain some of their lost independence.
As demonstrated in this video from the BRAIN Initiative’s “Show Us Your Brain!” contest, here’s how researchers envision the system would ultimately work:
• A person with vision loss puts on a specially designed set of glasses. Each lens contains two cameras: one to record visual information in the person’s field of vision; the other to track that person’s eye movements enabled by residual peripheral vision.
• The eyeglass cameras wirelessly stream the visual information they have recorded to two neuro-prosthetic devices implanted in the rear of the brain.
• The neuro-prosthetic devices process and project this information onto a specific set of excitatory neurons in the brain’s hard-wired visual pathway. Researchers have previously used genetic engineering to turn these neurons into surrogate photoreceptor cells, which function much like those in the eye’s retina.
• The surrogate photoreceptor cells in the brain relay visual information to the primary visual cortex for processing.
• All the while, the neuro-prosthetic devices perform quality control of the visual signals, calibrating them to optimize their contrast and clarity.
While this might sound like the stuff of science fiction (and this actual application still lies several years in the future), the OBServ project is now actually conceivable thanks to decades of advances in the fields of neuroscience, vision, bioengineering, and bioinformatics research. All this hard work has made the primary visual cortex, with its switchboard-like wiring system, among the brain’s best-understood regions.
OBServ also has implications that extend far beyond vision loss. This project provides hope that once other parts of the brain are fully mapped, it may be possible to design equally innovative systems to help make life easier for people with other disabilities and conditions.
Age-Related Macular Degeneration (National Eye Institute/NIH)
Macknik Lab (SUNY Downstate Health Sciences University, Brooklyn)
Martinez-Conde Laboratory (SUNY Downstate Health Sciences University)
Show Us Your Brain! (BRAIN Initiative/NIH)
NIH Support: BRAIN Initiative