The COVID-19 virus, while deadly, is often hidden in plain sight. Research suggests that one in five people infected with COVID-19 is asymptomatic. These people can, however, spread the virus unknowingly, which can prove fatal for those around them.
While the earlier medical understanding of asymptomatic people defined them as 'people exhibiting no signs of illness', a recent study by MIT researchers has uncovered that asymptomatic people do show symptoms. The researchers found that asymptomatic people differ from healthy individuals in the way they cough. While these coughs are indistinguishable from a healthy cough to the human ear, to an artificial intelligence (AI) system they are different!
The team released their findings recently in the IEEE Journal of Engineering in Medicine and Biology. They have created an AI model that can differentiate asymptomatic people from healthy individuals through recordings of their forced coughs. The team announced the discovery via a press release, available on the MIT website.
The AI model was trained on thousands of cough samples, as well as spoken words. The researchers then tested the model on voluntary recordings submitted by people via web browsers and devices such as laptops and cell phones. The model was 98.5% accurate in identifying people who were confirmed COVID-19 patients, including 100% correct identification of asymptomatic patients who had tested positive for the virus.
Now, the team is working on building a user-friendly app around the AI model, which could be launched and adopted across continents as a free, easily accessible and non-invasive screening tool used daily to potentially identify asymptomatic people. “The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory, or a restaurant,” says co-author Brian Subirana, a research scientist in MIT’s Auto-ID Laboratory, who undertook the research along with Jordi Laguarta and Ferran Hueto, also of MIT’s Auto-ID Laboratory.
While the research is pertinent in the time of this pandemic, the MIT researchers had been working on the project well before it began. They were developing AI models to analyse forced-cough recordings to diagnose Alzheimer's, a disease that not only affects memory but also causes neuromuscular degradation, such as weakened vocal cords.
The initial step was to train a general machine-learning algorithm, or neural network, known as ResNet50, to discriminate sounds based on different levels of vocal cord strength. "Studies have shown that the quality of the sound 'mmmm' can be an indication of how weak or strong a person’s vocal cords are," says the press release by MIT. Subirana trained the neural network on an audiobook dataset with more than 1,000 hours of speech to pick out the word 'them' from other words like 'the' and 'then'.
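The idea of measuring vocal cord strength from sound can be illustrated with a toy sketch. This is not MIT's ResNet50 model; it is a minimal numpy-only illustration, assuming that stronger phonation concentrates spectral energy in a few harmonic bins while a weak, breathy voice spreads energy across the spectrum. All function names and signal parameters here are invented for the example.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude spectrogram via a short-time FFT (numpy only)."""
    windows = [signal[i:i + frame] * np.hanning(frame)
               for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(windows, axis=1))

def harmonic_strength(spec):
    """Crude proxy for vocal cord strength: fraction of total energy
    held by the single strongest frequency bin."""
    per_bin = spec.sum(axis=0)
    return per_bin.max() / per_bin.sum()

sr = 8000
t = np.arange(sr) / sr
# A clear 120 Hz 'mmmm' versus a weak, noisy one
strong = np.sin(2 * np.pi * 120 * t)
weak = 0.2 * np.sin(2 * np.pi * 120 * t) + np.random.default_rng(0).normal(0, 0.5, sr)

print(harmonic_strength(spectrogram(strong)) > harmonic_strength(spectrogram(weak)))  # True
```

A real system would feed such spectrograms to a deep network rather than a hand-crafted ratio, but the underlying signal cue is the same.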
As a next step, the team developed a neural network to ascertain the emotional states of people through speech, because Alzheimer's patients have been shown to display emotions such as frustration and confusion more often than happiness and calmness. Finally, they trained a third neural network on a database of coughs in order to discern changes in lung and respiratory performance.
All three networks were then overlaid with an algorithm that detects muscular degradation. "The algorithm does so by essentially simulating an audio mask, or layer of noise, and distinguishing strong coughs — those that can be heard over the noise — over weaker ones," states the press release by MIT.
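The audio-mask idea described above can be sketched in a few lines. This is an illustrative simplification, not the published algorithm: it assumes the "mask" is additive noise and that "heard over the noise" reduces to the masked signal's energy clearing a threshold tied to the noise level. Every threshold and signal here is a made-up stand-in.

```python
import numpy as np

def audible_over_mask(cough, noise_level=0.3, seed=0):
    """Simulate an audio mask: add a layer of noise and ask whether the
    cough's overall energy still stands out above the noise floor."""
    noise = np.random.default_rng(seed).normal(0, noise_level, cough.size)
    masked = cough + noise
    return bool(masked.std() > 2 * noise_level)  # crude 'heard over the noise' test

rng = np.random.default_rng(1)
strong_cough = rng.normal(0, 1.0, 4000)  # high-energy burst
weak_cough = rng.normal(0, 0.1, 4000)    # low-energy burst

print(audible_over_mask(strong_cough), audible_over_mask(weak_cough))  # True False
```

A weak cough that disappears under the simulated noise would, on this reading, point toward muscular degradation.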
While the model succeeded in identifying Alzheimer's patients better than other models, the researchers wondered whether it could also identify COVID-19 patients, since COVID-19 shares neurological symptoms with Alzheimer's, such as neuromuscular impairment.
“The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs. This means that when you talk, part of your talking is like coughing, and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s in fact sentiment embedded in how you cough,” Subirana says. “So we thought, why don’t we try these Alzheimer’s biomarkers [to see if they’re relevant] for Covid.”
The team created a website to crowdsource cough recordings, along with a survey form that captured participants' symptoms, whether they had COVID-19, their diagnosis, gender, location and language. Through the website, more than 70,000 recordings have been submitted, containing at least 200,000 forced-cough samples. It is "the largest research cough dataset that we know of," says Subirana.
The team created a dataset of 2,500 COVID-associated recordings and 2,500 randomised recordings, then used 4,000 of these samples to train the model. They tested the model on the remaining 1,000 samples to gauge the effectiveness of the training.
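The train/test split described above is a standard machine-learning practice and can be sketched as follows. The filenames and function name are hypothetical; the only details taken from the article are the pool sizes (2,500 + 2,500) and the 4,000/1,000 split.

```python
import random

def split_dataset(covid, control, train_size=4000, seed=42):
    """Pool labelled COVID and control samples, shuffle reproducibly,
    and carve out a train set and a held-out test set."""
    data = [(s, 1) for s in covid] + [(s, 0) for s in control]
    random.Random(seed).shuffle(data)
    return data[:train_size], data[train_size:]

covid = [f"covid_cough_{i}.wav" for i in range(2500)]      # hypothetical filenames
control = [f"control_cough_{i}.wav" for i in range(2500)]
train, test = split_dataset(covid, control)
print(len(train), len(test))  # 4000 1000
```

Holding out samples the model never saw during training is what makes the reported accuracy figures meaningful.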
The researchers discovered “a striking similarity between Alzheimer’s and Covid discrimination.” The model was able to correctly identify COVID-19 from the samples based on four biomarkers: vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation, without any tweaking of the AI model originally meant to identify Alzheimer's patients. “We think this shows that the way you produce sound, changes when you have Covid, even if you’re asymptomatic,” Subirana says.
While the tool is highly effective at flagging asymptomatic COVID-19 patients, Subirana stresses that it is not meant to be a stand-alone diagnostic measure; it should be used to distinguish asymptomatic coughs from healthy coughs, with medical expertise relied on for a correct diagnosis.
The team plans to develop a free pre-screening app while making the system more effective; they are partnering with hospitals around the world to enlarge and diversify their set of cough recordings, which will help train the model and improve its accuracy.
Image: Christine Daniloff, MIT