
Google's AI researchers earlier this month announced Project Euphonia, a speech-to-text transcription service for people with speaking impairments. The service, developed in collaboration with the ALS Therapy Development Institute, can also improve automatic speech recognition for people with non-native English accents.

Patients with amyotrophic lateral sclerosis (ALS) often have slurred speech, and existing AI systems can serve only people without speech impediments, since those systems are trained on voice data from speakers with no speech difficulty or strong accent.

Project Euphonia, on the other hand, uses small quantities of data representing people with accents and ALS. "We show that 71% of the improvement comes from only five minutes of training data," researchers wrote in a paper titled "Personalizing ASR for Dysarthric and Accented Speech with Limited Data," which was published on July 31.
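Conceptually, this kind of personalization amounts to fine-tuning a pre-trained recognizer on a few minutes of an individual speaker's audio, typically updating only a small part of the network. The sketch below is a generic PyTorch illustration of that idea using a toy stand-in model and random data; it is not Project Euphonia's architecture or training code.

```python
# Generic personalization sketch (not the paper's code): freeze a
# pre-trained "base" encoder and adapt only the output layer on a tiny
# speaker-specific dataset, mimicking adaptation from minutes of audio.
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained ASR model: encoder + classifier head.
encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU())
head = nn.Linear(256, 32)  # 32 output symbols (hypothetical)

# Freeze the encoder; only the head is personalized.
for param in encoder.parameters():
    param.requires_grad = False

# Tiny "personal" dataset: random stand-ins for acoustic features and labels.
features = torch.randn(64, 80)         # 64 frames of 80-dim features
labels = torch.randint(0, 32, (64,))   # frame-level symbol targets

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                # a few adaptation steps
    logits = head(encoder(features))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final adaptation loss: {loss.item():.3f}")
```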

Written by 12 coauthors, the paper will be presented at Interspeech, the annual conference of the International Speech Communication Association, which takes place September 15-19 in Graz, Austria.

Interestingly, these models were able to achieve 62% and 35% relative word error rate (WER) improvement for ALS speech and accented speech, respectively. With the help of the ALS Therapy Development Institute, the researchers trained the model on an ALS speech data set consisting of 36 hours of audio from 67 people with ALS.
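To illustrate the metric quoted above, the sketch below computes word error rate and the relative improvement between a baseline and a personalized model. The transcripts and numbers are made-up examples, not results from the paper.

```python
# Minimal WER sketch (hypothetical data): word error rate via Levenshtein
# distance over words, plus the relative-improvement figure reported above.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between word sequences.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)


def relative_wer_improvement(baseline_wer: float, adapted_wer: float) -> float:
    """Relative improvement, e.g. 0.62 for the reported 62% on ALS speech."""
    return (baseline_wer - adapted_wer) / baseline_wer


if __name__ == "__main__":
    reference = "turn the living room lights on"
    baseline = wer(reference, "turn the living rum light on")   # 2 errors / 6 words
    adapted = wer(reference, "turn the living room light on")   # 1 error  / 6 words
    print(f"baseline WER: {baseline:.2f}, adapted WER: {adapted:.2f}")
    print(f"relative improvement: {relative_wer_improvement(baseline, adapted):.0%}")
```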

