Journal of Intelligent Systems and Internet of Things

Journal DOI: https://doi.org/10.54216/JISIoT

ISSN (Online): 2690-6791 | ISSN (Print): 2769-786X

Volume 18, Issue 2, pp. 304-314, 2026 | Full Length Article

Integrating Artificial Intelligence Driven Computer Vision Framework for Enhanced Sign Language Recognition in Hearing and Speech-Impaired People

Inderjeet Kaur 1 , P. Udayakumar 2 , B. Arundhati 3 , M. V. Rajesh 4 , Naif Almakayeel 5 * , Elvir Akhmetshin 6

  • 1 Department of Computer Science and Engineering, Ajay Kumar Garg Engineering College Ghaziabad, India - (inderjeetk@gmail.com)
  • 2 Department of Computer Science and Engineering, Akshaya College of Engineering and Technology, Kinathukadavu, Coimbatore, India - (udayakumarp@acetcbe.edu.in)
  • 3 Department of EEE, Vignan's Institute of Engineering for Women, Visakhapatnam, Andhra Pradesh, India - (b_arundhati@rediffmail.com)
  • 4 Department of Information Technology, Aditya University, Surampalem, India - (rajesh.masina@adityauniversity.in)
  • 5 Department of Industrial Engineering, College of Engineering, King Khalid University, Abha, Saudi Arabia - (halmakaeel@kku.edu.sa)
  • 6 Department of Science, Innovations and Technology, Mamun University, 220900, Khiva, Uzbekistan - (akhmetshin@mamunedu.uz)
  • DOI: https://doi.org/10.54216/JISIoT.180221

    Received: April 07, 2025 | Revised: June 26, 2025 | Accepted: August 18, 2025
    Abstract

    Sign language (SL) detection and classification for deaf persons is an essential application of machine learning (ML) and computer vision (CV) techniques. Such systems capture the signs performed by a signer and convert them into textual or auditory output. Building an accurate and robust SL recognition approach is very challenging, however, owing to factors such as occlusions, lighting conditions, and variations in hand shapes and movements; consequently, carefully trained and tested CV and ML models are essential. Hand gesture detection, built on convolutional neural networks (CNNs) and human-computer interaction (HCI), has proved beneficial for hearing- and speech-impaired individuals by classifying continuous SL signals. In this article, an Improved Fennec Fox Algorithm for Deep Learning-Based Sign Language Recognition in Hearing and Speaking Impaired People (IFFADL-SLRHSIP) technique is proposed. The main intention of the presented IFFADL-SLRHSIP technique is to enable effective communication between hearing- and speech-impaired persons and hearing persons using CV and artificial intelligence techniques. In the IFFADL-SLRHSIP model, an enhanced SqueezeNet model captures the intricate patterns and nuances of SL gestures, and a recurrent neural network (RNN) performs the SL classification. To optimize model performance, the improved fennec fox algorithm (IFFA) is applied for parameter tuning, enhancing the model's precision and efficiency. The experimental results of the IFFADL-SLRHSIP algorithm are validated on an SL dataset, and the simulation outcomes demonstrate its superior performance across diverse measures.
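    As a rough illustration of the parameter-tuning step mentioned in the abstract: the IFFA is a population-based metaheuristic, and the minimal loop below sketches the general pattern such optimizers share (a candidate population, movement toward the current best, greedy replacement). The objective here is a toy stand-in for validation loss over two hyperparameters, and the update rule is a generic placeholder, not the paper's actual fennec-fox-inspired operators.

    ```python
    import numpy as np

    def population_search(objective, bounds, pop_size=20, iters=50, seed=0):
        """Generic population-based hyperparameter search (illustrative only;
        the IFFA in the paper uses its own fennec-fox-inspired update rules)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
        fitness = np.apply_along_axis(objective, 1, pop)
        for _ in range(iters):
            best = pop[fitness.argmin()]
            # Move each candidate toward the current best, plus random
            # perturbation (a stand-in for IFFA's exploration phases).
            trial = pop + rng.uniform(0, 1, pop.shape) * (best - pop) \
                    + rng.normal(0, 0.1, pop.shape) * (hi - lo)
            trial = np.clip(trial, lo, hi)
            trial_fit = np.apply_along_axis(objective, 1, trial)
            improved = trial_fit < fitness          # greedy replacement
            pop[improved], fitness[improved] = trial[improved], trial_fit[improved]
        return pop[fitness.argmin()], fitness.min()

    # Toy objective: pretend validation loss as a function of two hyperparameters.
    obj = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2
    best_x, best_f = population_search(obj, ([0.0, 0.0], [1.0, 1.0]))
    ```

    In the paper's pipeline, the objective would instead be the validation error of the SqueezeNet-RNN model under a given hyperparameter setting.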

    Keywords:

    Sign Language Recognition, Fennec Fox Algorithm, Computer Vision, Deep Learning, Hearing-Impaired Person

    Cite This Article As:
    I. Kaur, P. Udayakumar, B. Arundhati, M. V. Rajesh, N. Almakayeel, and E. Akhmetshin, "Integrating Artificial Intelligence Driven Computer Vision Framework for Enhanced Sign Language Recognition in Hearing and Speech-Impaired People," Journal of Intelligent Systems and Internet of Things, vol. 18, no. 2, pp. 304-314, 2026. DOI: https://doi.org/10.54216/JISIoT.180221