

Giving Thanks for Biomedical Research


This Thanksgiving, Americans have an abundance of reasons to be grateful—loving family and good food often come to mind. Here’s one more to add to the list: exciting progress in biomedical research. To check out some of that progress, I encourage you to watch this short video, produced by NIH’s National Institute of Biomedical Imaging and Engineering (NIBIB), that showcases a few cool gadgets and devices now under development.

Among the technological innovations is a wearable ultrasound patch for monitoring blood pressure [1]. The patch was developed by a research team led by Sheng Xu and Chonghe Wang, University of California San Diego, La Jolla. When this small patch is worn on the neck, it measures blood pressure in the central arteries and veins by emitting continuous ultrasound waves.

Other great technologies featured in the video include:

Laser-Powered Glucose Meter. Peter So and Jeon Woong Kang, researchers at Massachusetts Institute of Technology (MIT), Cambridge, and their collaborators at MIT and the University of Missouri, Columbia, have developed a laser-powered device that measures glucose through the skin [2]. They report that the device could potentially provide accurate, continuous glucose monitoring for people with diabetes without painful finger pricks.

15-Second Breast Scanner. Lihong Wang, a researcher at California Institute of Technology, Pasadena, and colleagues have combined laser light and sound waves to create a rapid, noninvasive, painless breast scan. It can be performed while a woman rests comfortably on a table without the radiation or compression of a standard mammogram [3].

White Blood Cell Counter. Carlos Castro-Gonzalez, then a postdoc at Massachusetts Institute of Technology, Cambridge, and colleagues developed a portable, non-invasive home monitor that counts white blood cells as they pass through capillaries inside a finger [4]. The test, which takes about a minute, can be carried out at home and will help people undergoing chemotherapy determine whether their white cell count has dropped too low for the next dose, reducing the risk of treatment-compromising infections.

Neural-Enabled Prosthetic Hand (NEPH). Ranu Jung, a researcher at Florida International University, Miami, and colleagues have developed a prosthetic hand that restores a sense of touch, grip, and finger control for amputees [5]. NEPH is a fully implantable, wirelessly controlled system that directly stimulates nerves. More than two years ago, the FDA approved a first-in-human trial of the NEPH system.

If you want to check out more taxpayer-supported innovations, take a look at NIBIB’s two previous videos from 2013 and 2018. As always, let me offer thanks to you from the NIH family—and from all Americans who care about the future of their health—for your continued support. Happy Thanksgiving!


[1] Monitoring of the central blood pressure waveform via a conformal ultrasonic device. Wang C, Li X, Hu H, Zhang L, Huang Z, Lin M, Zhang Z, Yun Z, Huang B, Gong H, Bhaskaran S, Gu Y, Makihata M, Guo Y, Lei Y, Chen Y, Wang C, Li Y, Zhang T, Chen Z, Pisano AP, Zhang L, Zhou Q, Xu S. Nat Biomed Eng. 2018 Sep;2:687-695.

[2] Evaluation of accuracy dependence of Raman spectroscopic models on the ratio of calibration and validation points for non-invasive glucose sensing. Singh SP, Mukherjee S, Galindo LH, So PTC, Dasari RR, Khan UZ, Kannan R, Upendran A, Kang JW. Anal Bioanal Chem. 2018 Oct;410(25):6469-6475.

[3] Single-breath-hold photoacoustic computed tomography of the breast. Lin L, Hu P, Shi J, Appleton CM, Maslov K, Li L, Zhang R, Wang LV. Nat Commun. 2018 Jun 15;9(1):2352.

[4] Non-invasive detection of severe neutropenia in chemotherapy patients by optical imaging of nailfold microcirculation. Bourquard A, Pablo-Trinidad A, Butterworth I, Sánchez-Ferro Á, Cerrato C, Humala K, Fabra Urdiola M, Del Rio C, Valles B, Tucker-Schwartz JM, Lee ES, Vakoc BJ, Padera TP, Ledesma-Carbayo MJ, Chen YB, Hochberg EP, Gray ML, Castro-González C. Sci Rep. 2018 Mar 28;8(1):5301.

[5] Enhancing Sensorimotor Integration Using a Neural Enabled Prosthetic Hand System


Sheng Xu Lab (University of California San Diego, La Jolla)

So Lab (Massachusetts Institute of Technology, Cambridge)

Lihong Wang (California Institute of Technology, Pasadena)

Video: Lihong Wang: Better Cancer Screenings

Carlos Castro-Gonzalez (Madrid-MIT M + Visión Consortium, Cambridge, MA)

Video: Carlos Castro-Gonzalez (YouTube)

Ranu Jung (Florida International University, Miami)

Video: New Prosthetic System Restores Sense of Touch (Florida International)

NIH Support: National Institute of Biomedical Imaging and Bioengineering; National Institute of Neurological Disorders and Stroke; National Heart, Lung, and Blood Institute; National Cancer Institute; Common Fund

Can a Mind-Reading Computer Speak for Those Who Cannot?


Credit: Adapted from Nima Mesgarani, Columbia University’s Zuckerman Institute, New York

Computers have learned to do some amazing things, from beating the world’s top-ranked chess masters to providing the equivalent of feeling in prosthetic limbs. Now, as heard in this brief audio clip counting from zero to nine, an NIH-supported team has combined innovative speech synthesis technology and artificial intelligence to teach a computer to read a person’s thoughts and translate them into intelligible speech.

Turning brain waves into speech isn’t just fascinating science. It might also prove life changing for people who have lost the ability to speak from conditions such as amyotrophic lateral sclerosis (ALS) or a debilitating stroke.

When people speak or even think about talking, their brains fire off distinctive, but previously poorly decoded, patterns of neural activity. Nima Mesgarani and his team at Columbia University’s Zuckerman Institute, New York, wanted to learn how to decode this neural activity.

Mesgarani and his team started out with a vocoder, a voice synthesizer that produces sounds based on an analysis of speech. It’s the same technology used by Amazon’s Alexa, Apple’s Siri, and similar devices to listen and respond appropriately to everyday commands.

As reported in Scientific Reports, the first task was to train a vocoder to produce synthesized sounds in response to brain waves instead of speech [1]. To do it, Mesgarani teamed up with neurosurgeon Ashesh Mehta, Hofstra Northwell School of Medicine, Manhasset, NY, who frequently performs brain mapping in people with epilepsy to pinpoint the sources of seizures before performing surgery to remove them.

In five patients already undergoing brain mapping, the researchers monitored activity in the auditory cortex, where the brain processes sound. The patients listened to recordings of short stories read by four speakers. In the first test, eight different sentences were repeated multiple times. In the next test, participants heard four new speakers repeat numbers from zero to nine.

From these exercises, the researchers reconstructed the words that people heard from their brain activity alone. Then the researchers tried various methods to reproduce intelligible speech from the recorded brain activity. They found it worked best to combine the vocoder technology with a form of computer artificial intelligence known as deep learning.

Deep learning is inspired by how our own brain’s neural networks process information, learning to focus on some details but not others. In deep learning, computers look for patterns in data. As they begin to “see” complex relationships, some connections in the network are strengthened while others are weakened.
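To make that strengthening-and-weakening idea concrete, here is a toy sketch—not the researchers’ actual model—in which a single artificial neuron learns, by gradient descent, which of two input connections actually predicts the target. The informative connection’s weight grows while the uninformative one fades, which is the essence of what happens (at vastly larger scale) inside a deep learning network:

```python
# Toy illustration of "strengthening some connections, weakening others."
# A single neuron with two input connections learns from examples; this is
# a hypothetical minimal sketch, not the model used in the study.

def train(samples, lr=0.1, epochs=200):
    w = [0.0, 0.0]  # connection weights, both start neutral
    for _ in range(epochs):
        for x, target in samples:
            y = w[0] * x[0] + w[1] * x[1]  # neuron's prediction
            err = y - target               # how far off it was
            for i in range(2):             # nudge each weight to reduce error
                w[i] -= lr * err * x[i]
    return w

# Input feature 0 always matches the target; feature 1 carries no signal.
data = [((1.0, 1.0), 1.0), ((0.0, 1.0), 0.0),
        ((1.0, 0.0), 1.0), ((0.0, 0.0), 0.0)]
weights = train(data)
# After training, the informative connection (weights[0]) approaches 1.0
# while the uninformative one (weights[1]) shrinks toward 0.0.
```

Real deep learning networks stack many such units in layers and learn millions of weights, but the update rule is the same in spirit: connections that help predict the data are reinforced, and the rest decay.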

In this case, the researchers used the deep learning networks to interpret the sounds produced by the vocoder in response to the brain activity patterns. When the vocoder-produced sounds were processed and “cleaned up” by those neural networks, it made the reconstructed sounds easier for a listener to understand as recognizable words, though this first attempt still sounds pretty robotic.

The researchers will continue testing their system with more complicated words and sentences. They also want to run the same tests on brain activity, comparing what happens when a person speaks or just imagines speaking. They ultimately envision an implant, similar to those already worn by some patients with epilepsy, that will translate a person’s thoughts into spoken words. That might open up all sorts of awkward moments if some of those thoughts weren’t intended for transmission!

Along with recently highlighted new ways to catch irregular heartbeats and cervical cancers, it’s yet another remarkable example of the many ways in which computers and artificial intelligence promise to transform the future of medicine.


[1] Towards reconstructing intelligible speech from the human auditory cortex. Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Sci Rep. 2019 Jan 29;9(1):874.


Advances in Neuroprosthetic Learning and Control. Carmena JM. PLoS Biol. 2013;11(5):e1001561.

Nima Mesgarani (Columbia University, New York)

NIH Support: National Institute on Deafness and Other Communication Disorders; National Institute of Mental Health