Dysarthria is a condition that impairs an individual's ability to control the muscles that play a major role in speech production. The loss of fine motor control over the muscles that move the lips, vocal cords, tongue and diaphragm results in abnormal speech delivery.

The severity of dysarthria can be assessed by analyzing the intelligibility of an individual's speech. Continuous intelligibility assessment helps speech-language pathologists not only study the impact of medication but also plan personalized therapy. An intelligibility assessment system helps clinicians immensely if it is

  1. reliable,
  2. automatic, and
  3. simple

for both (a) patients to undergo and (b) clinicians to interpret.

The scarcity of dysarthric speech data has led to the development of speaker-dependent automatic intelligibility assessment systems, which require patients to speak a very large number of utterances.

We propose an AI-based speech intelligibility assessment system in which

  1. the dysarthric patient speaks only a very small number of utterances, without sacrificing the accuracy of the intelligibility estimate, and, more importantly,
  2. the AI-based assessment score is very close to the perceptual score that a Speech Language Pathologist (SLP) can relate to.

Requiring only a small number of utterances from the patient, and producing a score the SLP can relate to, benefits both the dysarthric patient and the clinician from a usability perspective.
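As an illustration only (not the authors' actual method, which is described in the linked paper), one common way to approximate an SLP's perceptual intelligibility score is to run the patient's utterances through an automatic speech recognizer and measure word-level accuracy against the prompted text, averaged over the few utterances spoken. The sketch below assumes the ASR transcripts are already available as plain strings:

```python
# Illustrative sketch: estimate intelligibility as mean word accuracy
# of ASR transcripts against prompted texts. The function names and
# the (prompt, transcript) input format are assumptions for this demo.

def word_accuracy(reference: str, hypothesis: str) -> float:
    """Word accuracy = 1 - (word-level edit distance / reference length)."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return max(0.0, 1.0 - d[-1][-1] / max(len(ref), 1))

def intelligibility_score(pairs) -> float:
    """Mean word accuracy over (prompt, ASR transcript) pairs,
    expressed as a percentage an SLP can relate to."""
    return 100.0 * sum(word_accuracy(r, h) for r, h in pairs) / len(pairs)

# Two hypothetical utterances: one recognized perfectly, one with
# two word substitutions out of four words (accuracy 0.5).
pairs = [("the quick brown fox", "the quick brown fox"),
         ("she sells sea shells", "she sell sea shell")]
print(round(intelligibility_score(pairs), 1))  # → 75.0
```

Because the score is an average over utterances, only a handful of prompts is needed to get a stable estimate, which is what makes a low-utterance protocol attractive for patients.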

Sources of Article

https://www.sciencedirect.com/science/article/abs/pii/S0885230821000206; https://sunilkopparapu.blogspot.com/2021/03/why-is-it-hard-to-recognize.html

