This is an outdated version published on 2025-12-08.
Preprint / Version 1

Agentic Sign Language: Balanced Evaluation and Adaptive Monitoring for Inclusive Multimodal Communication

Authors

  • Manish Shukla
  • Jithesh Yemi Reddy

DOI:

https://doi.org/10.31224/5951

Keywords:

sign language, agentic AI, large language models, generative AI, adaptive monitoring, balanced evaluation

Abstract

Sign languages are rich visual languages used by tens of millions of people worldwide, yet there is a persistent shortage of trained human interpreters. Recent work on small-vocabulary interpreters shows that lightweight convolutional neural networks can recognise static finger-spelling with high accuracy [1]. However, these prototypes are limited to isolated signs, depend on homogeneous training data and omit the complex grammar, facial expressions and body movements that convey meaning in continuous sign language. This paper proposes a comprehensive architecture that leverages recent advances in agentic artificial intelligence (AI), large language models (LLMs) and generative AI to deliver end-to-end sign language communication. Our design integrates multimodal data acquisition, spatio-temporal sign recognition, LLM-based translation, generative sign synthesis and an agentic orchestration layer. We outline data collection strategies, model architectures, training protocols, ethical considerations and a roadmap toward inclusive, real-time sign language translation and generation.
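The abstract describes a pipeline of five stages: multimodal data acquisition, spatio-temporal sign recognition, LLM-based translation, generative sign synthesis, and an agentic orchestration layer that coordinates them. A minimal sketch of how such an orchestrator might wire the stages together is shown below; all class and function names are illustrative placeholders, not the paper's actual implementation, and each stage is stubbed where a real model would sit.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A single multimodal observation (video frame plus pose keypoints)."""
    video: bytes
    keypoints: list

def recognise_signs(frames: list) -> list:
    """Spatio-temporal recognition stub: maps frames to sign glosses.
    A real system would use a CNN/transformer over the frame sequence."""
    return ["HELLO", "WORLD"]

def translate_glosses(glosses: list) -> str:
    """LLM translation stub: turns glosses into fluent spoken-language text."""
    return " ".join(glosses).capitalize() + "."

def synthesise_signs(text: str) -> list:
    """Generative synthesis stub: maps text to an avatar pose sequence."""
    return [{"pose": word} for word in text.split()]

def orchestrate(frames: list) -> dict:
    """Agentic orchestration layer: routes data through the stages in order
    and returns both the translation and a synthesised signed reply."""
    glosses = recognise_signs(frames)
    text = translate_glosses(glosses)
    reply_poses = synthesise_signs(text)
    return {"glosses": glosses, "text": text, "reply": reply_poses}

result = orchestrate([Frame(video=b"", keypoints=[])])
print(result["text"])
```

In a full agentic design, `orchestrate` would additionally decide when to re-query a stage (e.g. asking the recogniser for more frames on low confidence), which is where the paper's adaptive monitoring would attach.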

Downloads

Download data is not yet available.

Posted

2025-12-08
