
From Brain Waves to Real-Time Text Messaging


People who have lost the ability to speak due to a severe disability want to get the words out. They just can’t physically do it. But in our digital age, there is now a fascinating way to overcome such profound physical limitations. Computers are being taught to decode brain waves as a person tries to speak and then translate them into text on a computer screen in real time.

The latest progress, demonstrated in the video above, establishes that it’s quite possible for computers trained with current artificial intelligence (AI) methods to restore a vocabulary of more than 1,000 words for people with the mental but not the physical ability to speak. That covers more than 85 percent of day-to-day communication in English. With further refinements, the researchers say a 9,000-word vocabulary is well within reach.

The findings published in the journal Nature Communications come from a team led by Edward Chang, University of California, San Francisco [1]. Earlier, Chang and colleagues established that this AI-enabled system could directly decode 50 full words in real time from brain waves alone in a person with paralysis trying to speak [2]. The study is known as BRAVO, short for Brain-computer interface Restoration Of Arm and Voice.

In the latest BRAVO study, the team wanted to figure out how to condense the English language into compact units for easier decoding and expand that 50-word vocabulary. They did it in the same way we all do: by focusing not on complete words, but on the 26-letter alphabet.

The study involved a 36-year-old male with severe limb and vocal paralysis. The team designed a sentence-spelling pipeline for this individual, which enabled him to silently spell out messages in his head using code words corresponding to each of the 26 letters. As he did so, a high-density array of electrodes implanted over the brain’s sensorimotor cortex, part of the cerebral cortex, recorded his brain waves.

A sophisticated system including signal processing, speech detection, word classification, and language modeling then translated those thoughts into coherent words and complete sentences on a computer screen. This so-called speech neuroprosthesis system allows those who have lost their speech to perform roughly the equivalent of text messaging.
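To make that pipeline concrete, here is a minimal Python sketch of how such a spelling decoder might be organized. Everything in it is a toy stand-in (random weights, simulated electrode windows, a tiny vocabulary), not the BRAVO team’s actual code; it only illustrates the general flow from signal features to letter probabilities to vocabulary-constrained decoding:

```python
# A minimal sketch of a spelling-decoder pipeline. All components are
# hypothetical stand-ins for illustration, not the BRAVO system's code.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def bandpass_features(ecog_window: np.ndarray) -> np.ndarray:
    """Toy signal-processing step: per-channel power as a feature vector."""
    return np.log1p((ecog_window ** 2).mean(axis=-1))

def classify_letter(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy linear classifier returning a probability over the 26 letters."""
    logits = weights @ features
    e = np.exp(logits - logits.max())
    return e / e.sum()

def decode_word(windows, weights, vocab):
    """Language-model-style step: pick the vocabulary word most consistent
    with the per-letter probabilities."""
    probs = [classify_letter(bandpass_features(w), weights) for w in windows]
    def score(word):
        if len(word) != len(probs):
            return -np.inf
        return sum(np.log(p[ALPHABET.index(c)]) for p, c in zip(probs, word))
    return max(vocab, key=score)

# Usage with random stand-in data:
rng = np.random.default_rng(0)
weights = rng.normal(size=(26, 128))                        # 26 letters x 128 electrodes
windows = [rng.normal(size=(128, 200)) for _ in range(4)]   # 4 attempted letters
print(decode_word(windows, weights, ["good", "gold", "legs"]))
```

The real system replaces each stand-in with trained neural networks and a full language model, but the overall shape of the computation is the same.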

Chang’s team put their spelling system to the test first by asking the participant to silently reproduce a sentence displayed on a screen. They then moved on to conversations, in which the participant was asked a question and could answer freely. For instance, as in the video above, when the computer asked, “How are you today?” he responded, “I am very good.” When asked about his favorite time of year, he answered, “summertime.” An attempted hand movement signaled the computer when he was done speaking.

The computer didn’t get it exactly right every time. For instance, in the initial trials with the target sentence, “good morning,” the computer got it exactly right in one case and in another came up with “good for legs.” But, overall, the tests show that the AI device could decode silently spelled letters with a high degree of accuracy, producing sentences from a 1,152-word vocabulary at a speed of about 29 characters per minute.

On average, the spelling system got it wrong 6 percent of the time. That’s really good when you consider how common it is for errors to arise with dictation software or in any text message conversation.
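For readers curious how an error rate like that 6 percent is typically quantified, a character error rate is usually the edit distance between the decoded and the intended text, divided by the intended length. A generic illustration of the arithmetic, not the study’s evaluation code:

```python
# Character error rate via Levenshtein edit distance (generic sketch).
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def char_error_rate(decoded: str, intended: str) -> float:
    return edit_distance(decoded, intended) / len(intended)

print(char_error_rate("good for legs", "good morning"))  # nonzero: a miss
print(char_error_rate("good morning", "good morning"))   # 0.0: exact match
```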

Of course, much more work is needed to test this approach in many more people. The researchers don’t yet know how individual differences or specific medical conditions might affect the outcomes. They suspect that this general approach will work for anyone as long as they remain mentally capable of thinking through and attempting to speak.

They also envision future improvements as part of their BRAVO study. For instance, it may be possible to develop a system capable of more rapid decoding of many commonly used words or phrases. Such a system could then reserve the slower spelling method for other, less common words.
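In code, that envisioned hybrid might look something like the sketch below: try a fast whole-word decoder first, and fall back to letter-by-letter spelling when confidence is low. This is purely illustrative; the decoders here are placeholder functions, not BRAVO components:

```python
# A sketch of the envisioned hybrid decoder (illustrative only).
def decode(signal, word_decoder, speller, common=frozenset({"water", "help"})):
    word, confidence = word_decoder(signal)
    if word in common and confidence > 0.9:
        return word               # fast path: directly decoded common word
    return speller(signal)        # fallback: slower letter-by-letter spelling

# Stand-in decoders for demonstration:
print(decode("sig", lambda s: ("water", 0.95), lambda s: "s-p-e-l-l-e-d"))
print(decode("sig", lambda s: ("zygote", 0.95), lambda s: "s-p-e-l-l-e-d"))
```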

But, as these results clearly demonstrate, this combination of artificial intelligence and silently controlled speech neuroprostheses holds fantastic potential: it can restore not just speech, but meaningful communication and authentic connection between individuals who’ve lost the ability to speak and their loved ones. For that, I say BRAVO.

References:

[1] Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP, Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-Chan A, Ganguly K, Chang EF. Nat Commun. 2022;13:6510.

[2] Neuroprosthesis for decoding speech in a paralyzed person with anarthria. Moses DA, Metzger SL, Liu JR, Tu-Chan A, Ganguly K, Chang EF, et al. N Engl J Med. 2021 Jul 15;385(3):217-227.

Links:

Voice, Speech, and Language (National Institute on Deafness and Other Communication Disorders/NIH)

ECoG BMI for Motor and Speech Control (BRAVO) (ClinicalTrials.gov)

Chang Lab (University of California, San Francisco)

NIH Support: National Institute on Deafness and Other Communication Disorders


National Library of Medicine Helps Lead the Way in AI Research


Caption: The earth surrounded by a ring of data. Credit: National Library of Medicine, NIH

Did you know that the NIH’s National Library of Medicine (NLM) has been serving science and society since 1836? From its humble beginning as a small collection of books in the library of the U.S. Army Surgeon General’s office, NLM has grown to become not only the world’s largest biomedical library, but also a leader in biomedical informatics and computational health data science research.

Think of NLM as a door through which you pass to connect with health data, literature, medical and scientific information, expertise, and sophisticated mathematical models or images that describe a clinical problem. This intersection of information, people, and technology allows NLM to foster discovery. NLM does so by ensuring that scientists, clinicians, librarians, patients, and the public have access to biomedical information 24 hours a day, 7 days a week.

The NLM also supports two research efforts: the Division of Extramural Programs (EP) and the Intramural Research Program (IRP). Both programs are accelerating advances in biomedical informatics, data science, computational biology, and computational health. One of EP’s notable investments focuses on advancing artificial intelligence (AI) methods and reimagining how health care is delivered.

Caption: How to teach machines, showing four different piles of pills. Credit: National Library of Medicine, NIH

With support from NLM, Corey Lester and his colleagues at the University of Michigan College of Pharmacy, Ann Arbor, MI, are using AI to assist in pill verification, a standard procedure in pharmacies across the land. They want to help pharmacists avoid dangerous and costly dispensing errors. To do so, Lester is developing a real-time computer vision model that views pills inside a medication bottle, accurately identifies them, and determines whether they are the correct contents.
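The verification step itself can be pictured as a simple comparison between what a vision model detects in the bottle and what the prescription calls for. A minimal sketch, with hypothetical drug labels standing in for the model’s detected pill classes (this is not the Michigan team’s code):

```python
# Sketch of the pill-verification check: do detected pill classes and
# counts match the prescription? Labels below are hypothetical.
from collections import Counter

def verify_fill(detected_labels, prescribed_label, prescribed_count):
    """Compare detected pill classes/counts with the prescription."""
    counts = Counter(detected_labels)
    wrong = {k: v for k, v in counts.items() if k != prescribed_label}
    ok = not wrong and counts[prescribed_label] == prescribed_count
    return ok, counts

ok, counts = verify_fill(["lisinopril_10mg"] * 30, "lisinopril_10mg", 30)
print(ok)  # True: contents match the prescription

ok, counts = verify_fill(
    ["lisinopril_10mg"] * 29 + ["atorvastatin_20mg"], "lisinopril_10mg", 30)
print(ok, counts)  # False: flag the bottle for pharmacist review
```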

The IRP develops and applies computational methods and approaches to a broad range of information problems in biology, biomedicine, and human health. The IRP also offers intramural training opportunities and supports other training for students and professionals, from the pre-baccalaureate through the postdoctoral level.

NLM principal investigators use biological data to advance computer algorithms and to uncover relationships between levels of biological organization and health conditions. They also work in the computational health sciences, with a focus on clinical information processing: analyzing clinical data, assessing clinical outcomes, and setting health data standards.

Caption: Four chest X-rays. Credit: National Library of Medicine, NIH

NLM investigator Sameer Antani is collaborating with researchers in other NIH institutes to explore how AI can help us understand oral cancer, echocardiography, and pediatric tuberculosis. His research is also examining how images can be mined for data to predict the causes and outcomes of conditions. Examples of Antani’s work can be found in mobile radiology vehicles, which allow professionals to take chest X-rays and screen for HIV and tuberculosis using software containing algorithms developed in his lab.

For AI to have its full impact, more algorithms and approaches that harness the power of data are needed. That’s why NLM supports hundreds of other intramural and extramural scientists who are addressing challenging health and biomedical problems. The NLM-funded research is focused on how AI can help people stay healthy through early disease detection, disease management, and clinical and treatment decision-making—all leading to the ultimate goal of helping people live healthier and happier lives.

The NLM is proud to lead the way in the use of AI to accelerate discovery and transform health care. Want to learn more? Follow me on Twitter. Or, you can follow my blog, NLM Musings from the Mezzanine, and receive periodic NLM research updates.

I would like to thank Valerie Florance, Acting Scientific Director of NLM IRP, and Richard Palmer, Acting Director of NLM Division of EP, for their assistance with this post.

Links:

National Library of Medicine (National Library of Medicine/NIH)

Video: Using Machine Intelligence to Prevent Medication Dispensing Errors (NLM Funding Spotlight)

Video: Sameer Antani and Artificial Intelligence (NLM)

NLM Division of Extramural Programs (NLM)

NLM Intramural Research Program (NLM)

NLM Intramural Training Opportunities (NLM)

Principal Investigators (NLM)

NLM Musings from the Mezzanine (NLM)

Note: Dr. Lawrence Tabak, who performs the duties of the NIH Director, has asked the heads of NIH’s Institutes and Centers (ICs) to contribute occasional guest posts to the blog to highlight some of the interesting science that they support and conduct. This is the 20th in the series of NIH IC guest posts that will run until a new permanent NIH director is in place.


Using AI to Find New Antibiotics Still a Work in Progress


Caption: A protein over a computer network.

Each year, more than 2.8 million people in the United States develop bacterial infections that don’t respond to treatment and sometimes turn life-threatening [1]. Their infections are antibiotic-resistant, meaning the bacteria have changed in ways that allow them to withstand our current, widely used arsenal of antibiotics. It’s a serious and growing health-care problem here and around the world. To fight back, doctors desperately need new antibiotics, including novel classes of drugs that bacteria haven’t encountered before and haven’t yet developed ways to resist.

Developing new antibiotics, however, involves much time, research, and expense. It’s also fraught with false leads. That’s why some researchers have turned to harnessing the predictive power of artificial intelligence (AI) in hopes of selecting the most promising leads faster and with greater precision.

It’s a potentially paradigm-shifting development in drug discovery, and a recent NIH-funded study, published in the journal Molecular Systems Biology, demonstrates AI’s potential to streamline the process of selecting future antibiotics [2]. The results are also a bit sobering. They highlight the current limitations of one promising AI approach, showing that further refinement will still be needed to maximize its predictive capabilities.

These findings come from the lab of James Collins, Massachusetts Institute of Technology (MIT), Cambridge, and his recently launched Antibiotics-AI Project. His audacious goal is to develop seven new classes of antibiotics to treat seven of the world’s deadliest bacterial pathogens in just seven years. What makes this project so bold is that only two new classes of antibiotics have reached the market in the last 50 years!

In the latest study, Collins and his team looked to an AI program called AlphaFold2 [3]. The name might ring a bell. AlphaFold’s AI-powered ability to predict protein structures was a finalist in Science Magazine’s 2020 Breakthrough of the Year. In fact, AlphaFold has been used already to predict the structures of more than 200 million proteins, or almost every known protein on the planet [4].

AlphaFold employs a deep learning approach that can predict most protein structures from their amino acid sequences about as well as more costly and time-consuming protein-mapping techniques. In the deep learning models used to predict protein structure, computers are “trained” on existing data. As computers “learn” to understand complex relationships within the training material, they develop a model that can then be applied to predict 3D protein structures from linear amino acid sequences, without relying on new experiments in the lab.

Collins and his team hoped to combine AlphaFold with computer simulations commonly used in drug discovery as a way to predict interactions between essential bacterial proteins and antibacterial compounds. If it worked, researchers could then conduct virtual rapid screens of millions of new synthetic drug compounds targeting key bacterial proteins that existing antibiotics don’t. It would also enable the rapid development of antibiotics that work in novel ways, exactly what doctors need to treat antibiotic-resistant infections.

To test the strategy, Collins and his team focused first on the predicted structures of 296 essential proteins from the Escherichia coli bacterium as well as 218 antibacterial compounds. Their computer simulations then predicted how strongly any two molecules (essential protein and antibacterial) would bind together based on their shapes and physical properties.
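Conceptually, that screen is an exhaustive loop over protein-compound pairs, with each pair assigned a predicted binding score and the tightest binders flagged for follow-up. The sketch below uses a random stand-in for the docking calculation, which in the real study is a physics-based simulation:

```python
# Schematic of the virtual screen: score every essential-protein x
# compound pair, then rank. `dock_score` is a random stand-in, not a
# real docking API.
import itertools
import random

proteins = [f"protein_{i}" for i in range(296)]    # essential E. coli proteins
compounds = [f"compound_{j}" for j in range(218)]  # antibacterial compounds

def dock_score(protein: str, compound: str) -> float:
    """Stand-in for a docking score (lower = tighter predicted binding)."""
    return random.random()

random.seed(0)
scores = {(p, c): dock_score(p, c)
          for p, c in itertools.product(proteins, compounds)}

# Flag the strongest predicted interactions for testing in the lab.
top = sorted(scores, key=scores.get)[:10]
print(top[:3])
```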

It turned out that screening many antibacterial compounds against many potential targets in E. coli led to inaccurate predictions. For example, when comparing their computational predictions with actual interactions for 12 essential proteins measured in the lab, they found that their simulated model had about a 50:50 chance of being right. In other words, it couldn’t identify true interactions between drugs and proteins any better than random guessing.
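That “50:50 chance” corresponds, roughly, to an area under the ROC curve (AUC) near 0.5, the benchmark for random guessing. Here is a small illustration of how such a comparison is scored, using scikit-learn and simulated data rather than the study’s measurements:

```python
# Benchmarking logic in miniature: compare predicted binding scores
# against lab-measured interactions. AUC ~0.5 means no better than chance.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
measured = rng.integers(0, 2, size=200)   # 1 = true interaction in the lab
predicted = rng.random(size=200)          # uninformative predicted scores

print(round(roc_auc_score(measured, predicted), 2))  # ~0.5: coin-flip accuracy
```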

They suspect one reason for their model’s poor performance is that the protein structures used to train the computer are fixed, rather than flexible, shifting physical configurations, as they are in real life. To improve their success rate, they ran their predictions through additional machine-learning models that had been trained on data to help them “learn” how proteins and other molecules reconfigure themselves and interact. While this souped-up model got somewhat better results, the researchers report that they still aren’t good enough to identify promising new drugs and their protein targets.

What now? In future studies, the Collins lab will continue to incorporate and train the computers on even more biochemical and biophysical data to help with the predictive process. That’s why this study should be interpreted as an interim progress report on an area of science that will only get better with time.

But it’s also a sobering reminder that the quest to find new classes of antibiotics won’t be easy—even when aided by powerful AI approaches. We certainly aren’t there yet, but I’m confident that we will get there to give doctors new therapeutic weapons and turn back the rise in antibiotic-resistant infections.

References:

[1] 2019 Antibiotic resistance threats report. Centers for Disease Control and Prevention.

[2] Benchmarking AlphaFold-enabled molecular docking predictions for antibiotic discovery. Wong F, Krishnan A, Zheng EJ, Stark H, Manson AL, Earl AM, Jaakkola T, Collins JJ. Mol Syst Biol. 2022 Sep 6;18:e11081.

[3] Highly accurate protein structure prediction with AlphaFold. Jumper J, Evans R, Pritzel A, Kavukcuoglu K, Kohli P, Hassabis D, et al. Nature. 2021 Aug;596(7873):583-589.

[4] ‘The entire protein universe’: AI predicts shape of nearly every known protein. Callaway E. Nature. 2022 Aug;608(7921):15-16.

Links:

Antimicrobial (Drug) Resistance (National Institute of Allergy and Infectious Diseases/NIH)

Collins Lab (Massachusetts Institute of Technology, Cambridge)

The Antibiotics-AI Project, The Audacious Project (TED)

AlphaFold (DeepMind, London, United Kingdom)

NIH Support: National Institute of Allergy and Infectious Diseases; National Institute of General Medical Sciences


Using AI to Advance Understanding of Long COVID Syndrome


The COVID-19 pandemic continues to present considerable public health challenges in the United States and around the globe. One of the most puzzling is why many people who get over an initial and often relatively mild COVID illness later develop new and potentially debilitating symptoms. These symptoms run the gamut, including fatigue, shortness of breath, brain fog, anxiety, and gastrointestinal trouble.

People understandably want answers to help them manage this complex condition, referred to as Long COVID syndrome. But because Long COVID is so variable from person to person, it’s extremely difficult to work backwards and determine what these people had in common that might have made them susceptible to Long COVID. The variability also makes it difficult to identify all those who have Long COVID, whether they realize it or not. But a recent study, published in the journal Lancet Digital Health, shows that a well-trained computer and its artificial intelligence can help [1].

Researchers found that computers, after scanning thousands of electronic health records (EHRs) from people with Long COVID, could reliably make the call. The results, though still preliminary and in need of further validation, point the way to developing a fast, easy-to-use computer algorithm to help determine whether a person with a positive COVID test is likely to battle Long COVID.

In this groundbreaking study, NIH-supported researchers led by Emily Pfaff, University of North Carolina, Chapel Hill, and Melissa Haendel, the University of Colorado Anschutz Medical Campus, Aurora, relied on machine learning. In machine learning, a computer sifts through vast amounts of data to look for patterns. One reason machine learning is so powerful is that it doesn’t require humans to tell the computer which features it should look for. As such, machine learning can pick up on subtle patterns that people would otherwise miss.

In this case, Pfaff, Haendel, and team decided to “train” their computer on EHRs from people who had reported a COVID-19 infection. (The records are de-identified to protect patient privacy.) The researchers found just what they needed in the National COVID Cohort Collaborative (N3C), a national, publicly available data resource sponsored by NIH’s National Center for Advancing Translational Sciences. It is part of NIH’s Researching COVID to Enhance Recovery (RECOVER) initiative, which aims to improve understanding of Long COVID.

The researchers defined a group of more than 1.5 million adults in N3C who either had been diagnosed with COVID-19 or had a record of a positive COVID-19 test at least 90 days prior. Next, they examined common features, including any doctor visits, diagnoses, or medications, from the group’s roughly 100,000 adults.

They fed that EHR data into a computer, along with health information from almost 600 patients who’d been seen at a Long COVID clinic. They developed three machine learning models: one to identify potential Long COVID patients across the whole dataset and two others that focused separately on people who had or hadn’t been hospitalized.
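As a rough illustration of this kind of setup, the sketch below trains a standard classifier on simulated EHR-style features and measures its discrimination with an ROC AUC, the sort of metric behind the accuracy figures reported next. The features, labels, and model choice here are all assumptions for demonstration, not the N3C team’s actual pipeline:

```python
# Toy version of the machine-learning setup: learn to flag potential
# Long COVID patients from EHR-derived features (all data simulated).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 5000, 40                      # patients x features (visits, dx, meds)
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Discrimination on held-out patients (1.0 = perfect, 0.5 = random).
print(round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 2))
```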

All three models proved effective for identifying people with potential Long COVID. Each of the models had an 85 percent or better discrimination threshold, indicating they are highly accurate. That’s important because, once researchers can identify those with Long COVID in a large database of people such as N3C, they can begin to ask and answer many critical questions about any differences in an individual’s risk factors or treatment that might explain why some get Long COVID and others don’t.

This new study is also an excellent example of N3C’s goal to assemble data from EHRs that enable researchers around the world to get rapid answers and seek effective interventions for COVID-19, including its long-term health effects. It’s also made important progress toward the urgent goal of the RECOVER initiative to identify people with or at risk for Long COVID who may be eligible to participate in clinical trials of promising new treatment approaches.

Long COVID remains a puzzling public health challenge. Another recent NIH study published in the journal Annals of Internal Medicine set out to identify people with symptoms of Long COVID, most of whom had recovered from mild-to-moderate COVID-19 [2]. More than half had signs of Long COVID. But, despite extensive testing, the NIH researchers were unable to pinpoint any underlying cause of the Long COVID symptoms in most cases.

So if you’d like to help researchers solve this puzzle, RECOVER is now enrolling adults and kids—including those who have and have not had COVID—at more than 80 study sites around the country.

References:

[1] Identifying who has long COVID in the USA: a machine learning approach using N3C data. Pfaff ER, Girvin AT, Bennett TD, Bhatia A, Brooks IM, Deer RR, Dekermanjian JP, Jolley SE, Kahn MG, Kostka K, McMurry JA, Moffitt R, Walden A, Chute CG, Haendel MA; N3C Consortium. Lancet Digit Health. 2022 May 16:S2589-7500(22)00048-6.

[2] A longitudinal study of COVID-19 sequelae and immunity: baseline findings. Sneller MC, Liang CJ, Marques AR, Chung JY, Shanbhag SM, Fontana JR, Raza H, Okeke O, Dewar RL, Higgins BP, Tolstenko K, Kwan RW, Gittens KR, Seamon CA, McCormack G, Shaw JS, Okpali GM, Law M, Trihemasava K, Kennedy BD, Shi V, Justement JS, Buckner CM, Blazkova J, Moir S, Chun TW, Lane HC. Ann Intern Med. 2022 May 24:M21-4905.

Links:

COVID-19 Research (NIH)

National COVID Cohort Collaborative (N3C) (National Center for Advancing Translational Sciences/NIH)

RECOVER Initiative

Emily Pfaff (University of North Carolina, Chapel Hill)

Melissa Haendel (University of Colorado, Aurora)

NIH Support: National Center for Advancing Translational Sciences; National Institute of General Medical Sciences; National Institute of Allergy and Infectious Diseases


Artificial Intelligence Getting Smarter! Innovations from the Vision Field


Caption: Photograph of a retina.

One of many health risks premature infants face is retinopathy of prematurity (ROP), a leading cause of childhood blindness worldwide. ROP causes abnormal blood vessel growth in the light-sensing eye tissue called the retina. Left untreated, ROP can lead to scarring, retinal detachment, and blindness. It’s the disease that caused singer and songwriter Stevie Wonder to lose his vision.

Now, effective treatments are available—if the disease is diagnosed early and accurately. Advancements in neonatal care have led to the survival of extremely premature infants, who are at highest risk for severe ROP. Despite major advancements in diagnosis and treatment, tragically, about 600 infants in the U.S. still go blind each year from ROP. This disease is difficult to diagnose and manage, even for the most experienced ophthalmologists. And the challenges are much worse in remote corners of the world that have limited access to ophthalmic and neonatal care.

Caption: Neonatal retinas prior to AI processing. Left: image of a premature infant retina showing signs of severe ROP with large, twisted blood vessels. Right: normal neonatal retina by comparison. Credit: Casey Eye Institute, Oregon Health and Science University, Portland, and National Eye Institute, NIH

Artificial intelligence (AI) is helping bridge these gaps. Prior to my tenure as National Eye Institute (NEI) director, I helped develop a system called i-ROP Deep Learning (i-ROP DL), which automates the identification of ROP. In essence, we trained a computer to identify subtle abnormalities in retinal blood vessels from thousands of images of premature infant retinas. Strikingly, the i-ROP DL artificial intelligence system outperformed even international ROP experts [1]. This has enormous potential to improve the quality and delivery of eye care to premature infants worldwide.
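For readers who want to picture the training step, the sketch below fine-tunes a pretrained convolutional network to classify retinal images, using PyTorch and a recent torchvision, with random stand-in images. It illustrates the general transfer-learning recipe, not the actual i-ROP DL architecture or data:

```python
# Hedged sketch: fine-tune a pretrained CNN to label retinal images as
# normal vs. showing severe disease. Images and labels here are random.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: normal vs. ROP

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One training step on a stand-in batch of 8 RGB fundus images (224x224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```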

Of course, the promise of medical artificial intelligence extends far beyond ROP. In 2018, the FDA approved the first autonomous AI-based diagnostic tool in any field of medicine [2]. Called IDx-DR, the system streamlines screening for diabetic retinopathy (DR), and its results require no interpretation by a doctor. DR occurs when blood vessels in the retina grow irregularly, bleed, and potentially cause blindness. About 34 million people in the U.S. have diabetes, and each is at risk for DR.

As with ROP, early diagnosis and intervention are crucial to preventing vision loss from DR. The American Diabetes Association recommends people with diabetes see an eye care provider annually to have their retinas examined for signs of DR. Yet fewer than 50 percent of Americans with diabetes receive these annual eye exams.

The IDx-DR system was conceived by Michael Abramoff, an ophthalmologist and AI expert at the University of Iowa, Iowa City. With NEI funding, Abramoff used deep learning to design a system for use in a primary-care medical setting. A technician with minimal ophthalmology training can use the IDx-DR system to scan a patient’s retinas and get results indicating whether a patient should be sent to an eye specialist for follow-up evaluation or to return for another scan in 12 months.
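The autonomous part of that workflow boils down to mapping a screening score to one of two dispositions, with no physician interpretation in between. Here is a schematic of that triage logic, with an illustrative threshold rather than IDx-DR’s actual cutoff:

```python
# Schematic triage logic for autonomous DR screening (threshold is
# illustrative only, not IDx-DR's actual decision rule).
def triage(dr_score: float, refer_threshold: float = 0.5) -> str:
    """Map a screening score to one of the two possible outputs."""
    if dr_score >= refer_threshold:
        return "refer to eye-care specialist for follow-up"
    return "rescreen in 12 months"

print(triage(0.82))  # refer to eye-care specialist for follow-up
print(triage(0.10))  # rescreen in 12 months
```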

Caption: The IDx-DR is the first FDA-approved system for diagnostic screening of diabetic retinopathy. It’s designed to be used in a primary care setting. Results determine whether a patient needs immediate follow-up. Credit: Digital Diagnostics, Coralville, IA.

Many other methodological innovations in AI have occurred in ophthalmology. That’s because imaging is so crucial to disease diagnosis and clinical outcome data are so readily available. As a result, AI-based diagnostic systems are in development for many other eye diseases, including cataract, age-related macular degeneration (AMD), and glaucoma.

Rapid advances in AI are occurring in other medical fields, such as radiology, cardiology, and dermatology. But disease diagnosis is just one of many applications for AI. Neurobiologists are using AI to answer questions about retinal and brain circuitry, disease modeling, microsurgical devices, and drug discovery.

If it sounds too good to be true, it may be. There’s a lot of work that remains to be done. Significant challenges to AI utilization in science and medicine persist. For example, researchers from the University of Washington, Seattle, last year tested seven AI-based screening algorithms that were designed to detect DR. They found that, under real-world conditions, only one outperformed human screeners [3]. A key problem is that these AI algorithms need to be trained with more diverse images and data, including a wider range of races, ethnicities, and populations, as well as different types of cameras.

How do we address these gaps in knowledge? We’ll need larger datasets, a collaborative culture of sharing data and software libraries, broader validation studies, and algorithms to address health inequities and to avoid bias. The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) project and NIH’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Program project will be major steps toward addressing those gaps.

So, yes—AI is getting smarter. But harnessing its full power will rely on scientists and clinicians getting smarter, too.

References:

[1] Automated diagnosis of plus disease in retinopathy of prematurity using deep convolutional neural networks. Brown JM, Campbell JP, Beers A, Chang K, Ostmo S, Chan RVP, Dy J, Erdogmus D, Ioannidis S, Kalpathy-Cramer J, Chiang MF; Imaging and Informatics in Retinopathy of Prematurity (i-ROP) Research Consortium. JAMA Ophthalmol. 2018 Jul 1;136(7):803-810.

[2] FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. Food and Drug Administration. April 11, 2018.

[3] Multicenter, head-to-head, real-world validation study of seven automated artificial intelligence diabetic retinopathy screening systems. Lee AY, Yanagihara RT, Lee CS, Blazes M, Jung HC, Chee YE, Gencarella MD, Gee H, Maa AY, Cockerham GC, Lynch M, Boyko EJ. Diabetes Care. 2021 May;44(5):1168-1175.

Links:

Retinopathy of Prematurity (National Eye Institute/NIH)

Diabetic Eye Disease (NEI)

NEI Research News

Michael Abramoff (University of Iowa, Iowa City)

Bridge to Artificial Intelligence (Common Fund/NIH)

Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Program (NIH)

[Note: Acting NIH Director Lawrence Tabak has asked the heads of NIH’s institutes and centers to contribute occasional guest posts to the blog as a way to highlight some of the cool science that they support and conduct. This is the second in the series of NIH institute and center guest posts that will run until a new permanent NIH director is in place.]

