Preprint / Version 2

AURA: An AI-Powered Multimodal Prototype for Adaptive Apraxia of Speech Therapy and Communication Support

DOI:

https://doi.org/10.31224/6608

Keywords:

Apraxia of Speech, Multimodal AI, Augmentative and Alternative Communication (AAC), Reinforcement Learning, Speech Therapy, Adaptive Learning, Motor Learning Guided (MLG) Therapy

Abstract

Apraxia of Speech (AOS) is a motor speech disorder that significantly limits communication and requires intensive, long-term therapy. Access to consistent treatment is often constrained by shortages of speech-language pathologists, high costs, and limited opportunities for continuous monitoring outside clinical settings. Recent advances in Artificial Intelligence (AI) provide new opportunities to support scalable and personalized speech therapy.

This paper presents AURA (Adaptive Understanding and Relearning Assistant for Apraxia), a multimodal AI framework designed to support speech therapy, progress monitoring, and communication for individuals with AOS. The system integrates speech analysis, machine learning–based error detection, reinforcement learning for adaptive therapy, and multimodal Augmentative and Alternative Communication (AAC) support. An initial research prototype has been developed to demonstrate the feasibility of the proposed architecture and workflow. We describe the system architecture, its alignment with evidence-based motor learning principles, and a phased evaluation plan involving expert review, simulated testing, and pilot studies. The proposed framework aims to improve therapy accessibility, engagement, and data-driven intervention for individuals with AOS.

Posted

2026-03-16 — Updated on 2026-03-28

Versions

Version justification

Minor revision to the manuscript, including wording improvements in the introduction and addition of relevant in-text citations. The technical content and results remain unchanged.