Agentic Sign Language: Balanced Evaluation and Adaptive Monitoring for Inclusive Multimodal Communication
DOI: https://doi.org/10.31224/5951

Keywords: sign language, agentic AI, large language models, generative AI, adaptive monitoring, balanced evaluation

Abstract
Sign languages are rich visual languages used by tens of millions of people worldwide, yet there is a persistent shortage of trained human interpreters. Recent work on small-vocabulary interpreters shows that lightweight convolutional neural networks can recognise static finger-spelling with high accuracy [1]. However, these prototypes are limited to isolated signs, depend on homogeneous training data and omit the complex grammar, facial expressions and body movements that convey meaning in continuous sign language. This paper proposes a comprehensive architecture that leverages recent advances in agentic artificial intelligence (AI), large language models (LLMs) and generative AI to deliver end-to-end sign language communication. Our design integrates multimodal data acquisition, spatio-temporal sign recognition, LLM-based translation, generative sign synthesis and an agentic orchestration layer. We outline data collection strategies, model architectures, training protocols, ethical considerations and a roadmap toward inclusive, real-time sign language translation and generation.
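The abstract describes a five-stage pipeline: multimodal acquisition, spatio-temporal recognition, LLM-based translation, generative synthesis, and agentic orchestration. A minimal sketch of how such stages might be chained is given below; all class and function names (`SignClip`, `recognise`, `orchestrate`, etc.) are illustrative assumptions, not components defined in the paper, and each stage body is a placeholder for the real model.

```python
from dataclasses import dataclass

# Hypothetical sketch of the pipeline stages named in the abstract.
# Every name here is illustrative; the paper does not specify this API.

@dataclass
class SignClip:
    frames: list      # raw video frames (multimodal acquisition)
    landmarks: list   # hand/face/body keypoints per frame

def recognise(clip: SignClip) -> list:
    """Spatio-temporal recognition: map a clip to a gloss sequence.

    A real system would run a video model here; this stub returns a
    fixed gloss sequence for illustration only.
    """
    return ["HELLO", "WORLD"]

def translate(glosses: list) -> str:
    """LLM-based translation: gloss sequence -> fluent spoken-language text."""
    return " ".join(g.capitalize() for g in glosses)

def synthesise(text: str) -> SignClip:
    """Generative synthesis: spoken-language text -> a signed (avatar) clip."""
    return SignClip(frames=[], landmarks=[])

def orchestrate(clip: SignClip):
    """Agentic orchestration: chain the stages and return each hand-off,
    where a monitoring agent could inspect or retry intermediate results."""
    glosses = recognise(clip)
    text = translate(glosses)
    reply_clip = synthesise(text)
    return glosses, text, reply_clip
```

In a full implementation the orchestration layer would be an agent loop that monitors confidence at each hand-off rather than a straight-line function call.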
Downloads
Posted
Versions
- 2025-12-12 (2)
- 2025-12-08 (1)
License
Copyright (c) 2025 Manish Shukla, Jithesh Yemi Reddy

This work is licensed under a Creative Commons Attribution 4.0 International License.