Simplified Ultrasound Feedback for Speech Remediation

Recruiting
7 - 17 years of age
Both sexes
Phase N/A

Overview

Recent research in motor control shows that people learn new movements best when they receive feedback that directs attention externally, away from the body. Traditional ultrasound speech therapy works well for many children, but it involves teaching children to focus on their internal tongue movements. The goal of this study is to test whether ultrasound biofeedback delivered without showing children a display of their tongue movements can be an effective treatment for residual speech sound disorders. We focus on children who have trouble producing the sound "r," as in "rabbit." The first aim is to develop a fast, reliable system that tracks the movements of different parts of the tongue using ultrasound and identifies which combinations of movements produce a good "r" and which do not. The second aim is to develop a motivational game in which children receive feedback on the success of their tongue movements through what happens to an animated character on a screen. This new version of ultrasound feedback therapy will then be compared with the traditional version to determine how the two approaches can best be used in the clinic.

Description

Ultrasound biofeedback therapy (UBT), which offers a real-time view of the tongue during speech, has shown promise in treating residual speech sound disorders. However, the clinical utility of UBT is limited by the inherent difficulty of interpreting real-time ultrasound images of rapidly changing tongue deformations during speech. For patients to receive effective feedback from these images, clinicians must provide extensive explanation and orientation over multiple sessions. Especially for younger patients, this rich, complex visual feedback directs attention internally, a direction of attention known to degrade performance and force control during limb and oral movement tasks. Further, because speech sound disorders often coexist with cognitive or behavioral difficulties, many patients never master the interpretation of tongue ultrasound images. Thus, the tremendous promise of UBT has not been realized because its feedback is too complex, misdirected, and ambiguous.

Scientific studies of implicit motor learning suggest that UBT would be improved if (1) speakers were guided by simpler visual feedback and (2) the simplification engaged an external attentional focus. Feedback with these qualities is known to facilitate rapid, robust sensorimotor skill learning, so applying these principles to UBT is expected to provide substantial benefits to those with speech sound disorders.

The major goal of this project is to translate these recent advances in UBT and motor learning into a new, simplified ultrasound biofeedback system for better clinical treatment of residual speech sound disorders. The proposed system will transform the complex tongue movements captured by ultrasound into simplified real-time feedback displays, customized for each user to drive tongue movements closer to goal movement patterns. We will concentrate on remediation of /r/ and /l/, the sounds that are most complex and most frustrating to treat in clinical speech therapy. Using a novel method for tongue motion tracking based on processing of real-time ultrasound images, we will automatically characterize the differentiated motion of tongue parts during /r/ and /l/ production, and we will apply statistical cluster analysis to identify biofeedback targets that indicate correct production. These targets will drive a prototype simplified UBT system that uses simple, engaging visual feedback, presented through a highly motivating, gamified interface, to guide patients toward correct tongue movement. The simplified UBT system will be validated in a pilot clinical trial testing the hypotheses that simplified UBT is more effective than standard UBT and that the most effective biofeedback targets are those promoting greater differentiation of tongue motion. The end result will be the translation of recent advances in ultrasound imaging of speech, real-time image processing, and biofeedback-based motor learning into a novel clinical UBT tool that will greatly advance treatment of residual speech sound disorders.
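To make the tracking-and-clustering idea concrete, the sketch below shows one way such a pipeline could work. It is an illustration only, not the study's actual software: the three-part feature set, the use of k-means clustering, and the exponential scoring rule are all assumptions introduced here, and the data are synthetic stand-ins for ultrasound-derived measurements.

```python
# Illustrative sketch, not the study's implementation. Synthetic per-part
# tongue displacement features stand in for measurements extracted from
# real-time ultrasound frames; k-means and the scoring rule are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N_PARTS = 3  # hypothetical features, e.g., root, dorsum, blade displacement

# Rows are production attempts; columns are per-part displacement features.
correct = rng.normal(loc=[1.0, -0.5, 0.8], scale=0.1, size=(40, N_PARTS))
incorrect = rng.normal(loc=[0.2, 0.1, 0.1], scale=0.1, size=(40, N_PARTS))
X = np.vstack([correct, incorrect])

# Cluster attempts into two groups. Echoing the project's hypothesis that
# correct /r/ involves greater differentiation of tongue-part motion, take
# the centroid with the larger spread across parts as the feedback target.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
spread = km.cluster_centers_.std(axis=1)       # per-cluster differentiation
target = km.cluster_centers_[spread.argmax()]  # presumed "correct" pattern

def feedback_score(attempt: np.ndarray) -> float:
    """Map an attempt to a 0-1 score for driving a game character."""
    distance = float(np.linalg.norm(attempt - target))
    return float(np.exp(-distance))  # smooth, monotone; 1.0 = on target

print("target pattern:", np.round(target, 2))
print("correct-like attempt score:  ", round(feedback_score(correct[0]), 2))
print("incorrect-like attempt score:", round(feedback_score(incorrect[0]), 2))
```

In a working system the score would update frame by frame and be rendered only as the animated character's behavior, so the child attends to an external effect rather than to the ultrasound image itself.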

Eligibility

Inclusion Criteria:

  • Difficulty producing "r" (the American English rhotic, e.g., the first sound in "ray" or the last sound in "or"), scoring less than 20% accuracy on a 70-word probe
  • Speaks American English fluently
  • Between 7 and 17 years of age

Exclusion Criteria:

  • Does not pass a hearing screening

Study details

Condition: Speech Sound Disorder

ClinicalTrials.gov ID: NCT06366256

Sponsor: University of Cincinnati

Last updated: 3 May 2024

Step 1: Get in touch with the nearest study center

What happens next?
  • You can expect the study team to contact you via email or phone in the next few days.
  • Sign up as a volunteer to help accelerate the development of new treatments and to get notified about similar trials.

FAQs

Learn more about clinical trials

What is a clinical trial?

A clinical trial is a study designed to test the effectiveness and safety of specific interventions or treatments, paving the way for new, innovative healthcare solutions.

Why should I take part in a clinical trial?

Participating in a clinical trial provides early access to potentially effective treatments and directly contributes to the healthcare advancements that benefit us all.

How long does a clinical trial last?

The duration of clinical trials varies. Some last weeks, others years, depending on the trial's phase and purpose.

Do I get compensated for taking part in clinical trials?

Compensation varies by trial. Some offer payment or reimbursement for time and travel; others may not.

How safe are clinical trials?

Clinical trials follow strict ethical guidelines and protocols to safeguard participants' health. They are closely monitored, and their safety is reviewed regularly.