Volume 21, Issue 2, pp. 396-412, 2026 | Full Length Article
Lama Al Khuzayem 1*, Soukeina Elhassen 2
DOI: https://doi.org/10.54216/FPA.210225
Sign language is a vital communication means for hearing-impaired individuals, combining manual gestures with non-manual signals such as facial expressions and body movements, and often requiring both hands and sequential actions. Recently, automatic Sign Language Recognition (SLR) has gained increasing attention, with Machine Learning and Deep Learning systems achieving competitive performance. Although convolutional neural networks have been widely employed owing to their effectiveness in image-based recognition tasks, existing methods often struggle with efficiency, adaptability, and real-time deployment. This paper proposes an Internet of Things-integrated Deep Learning model for real-time SLR to enhance communication between hearing-impaired individuals and non-signers. The framework employs IoT-based wearable sensors to capture hand and finger movements, followed by Sobel filtering for noise reduction. MobileNetV3 is applied for lightweight feature extraction, while a Variational AutoEncoder enables robust sign detection. To further improve performance, an Improved Sparrow Search Algorithm is introduced for hyperparameter tuning, constituting the novelty of this work. Experimental results show that the proposed framework achieves an outstanding accuracy of 99.05% in comparison with state-of-the-art systems, validating its robustness and effectiveness for real-time SLR applications. Future work will explore large-scale deployment and multi-language adaptability.
Internet of Things, Deep Learning, Sign Language Recognition, Hearing-Impaired, Improved Sparrow Search Algorithm
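To make the described pipeline concrete, the sketch below chains the three processing stages named in the abstract: Sobel filtering for noise suppression, a MobileNetV3-Small backbone for lightweight feature extraction, and a VAE-style latent head for sign classification. It is a minimal Python/PyTorch illustration, not the authors' implementation; the class count, image size, latent dimension, and KL weight are illustrative assumptions, and the Improved Sparrow Search Algorithm tuning stage and the IoT sensor-acquisition code are not shown.

# Minimal sketch of the abstract's pipeline (illustrative assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v3_small

NUM_CLASSES = 37   # assumption: number of gesture classes in the training dataset
IMG_SIZE = 224     # assumption: input resolution fed to MobileNetV3

def sobel_filter(x):
    """Depthwise Sobel gradient magnitude over a (N, C, H, W) batch."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=x.device)
    ky = kx.t()
    c = x.shape[1]
    wx = kx.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    wy = ky.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(x, wx, padding=1, groups=c)
    gy = F.conv2d(x, wy, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class VAESignClassifier(nn.Module):
    """MobileNetV3-Small features -> variational latent -> class logits."""
    def __init__(self, latent_dim=64, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = mobilenet_v3_small(weights=None).features  # 576-channel output
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_mu = nn.Linear(576, latent_dim)
        self.fc_logvar = nn.Linear(576, latent_dim)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = self.pool(self.features(sobel_filter(x))).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.classifier(z), mu, logvar

def loss_fn(logits, labels, mu, logvar, beta=1e-3):
    """Cross-entropy plus a small KL regulariser from the VAE latent (beta is an assumption)."""
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return F.cross_entropy(logits, labels) + beta * kl

if __name__ == "__main__":
    model = VAESignClassifier()
    frames = torch.randn(2, 3, IMG_SIZE, IMG_SIZE)  # stand-in for captured gesture frames
    logits, mu, logvar = model(frames)
    print(logits.shape)  # torch.Size([2, 37])

In a full system, quantities such as latent_dim, beta, and the optimiser's learning rate would be the hyperparameters handed to the Improved Sparrow Search Algorithm described in the abstract.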
[1] V. Jain, A. Jain, A. Chauhan, S. S. Kotla, and A. Gautam, "American sign language recognition using support vector machine and convolutional neural network," Int. J. Inf. Technol., vol. 13, pp. 1193–1200, 2021.
[2] K. Bantupalli and Y. Xie, "American Sign Language Recognition using Deep Learning and Computer Vision," in Proc. IEEE Int. Conf. Big Data, 2018, pp. 4896–4899.
[3] S. Hollier and S. Abou-Zahra, "Internet of Things (IoT) as assistive technology: Potential applications in tertiary education," in Proc. 15th Int. Web for All Conf., 2018, pp. 1-4.
[4] J. Hou, X. Y. Li, P. Zhu, Z. Wang, Y. Wang, J. Qian, and P. Yang, "Signspeaker: A real-time, high-precision smartwatch-based sign language translator," in The 25th Annu. Int. Conf. Mobile Comput. Netw., 2019, pp. 1-15.
[5] V. Bhatnagar, R. Chandra, and V. Jain, "IoT based alert system for visually impaired persons," in Emerging Technologies in Computer Engineering: Microservices in Big Data Analytics: Second Int. Conf., ICETCE 2019, Jaipur, India, Feb. 1-2, 2019, Revised Selected Papers 2, 2019, pp. 216-223.
[6] M. Papatsimouli, P. Sarigiannidis, and G. F. Fragulis, "A survey of advancements in real-time sign language translators: integration with IoT technology," Technologies, vol. 11, no. 4, p. 83, 2023.
[7] B. Alsharif and M. Ilyas, "Internet of things technologies in healthcare for people with hearing impairments," in IoT and Big Data Technologies for Health Care. Cham: Springer Nature Switzerland, 2022, pp. 299-308.
[8] T. W. Chong and B. G. Lee, "American sign language recognition using leap motion controller with machine learning approach," Sensors, vol. 18, no. 10, p. 3554, 2018.
[9] Y. Ma, G. Zhou, S. Wang, H. Zhao, and W. Jung, "SignFi: Sign language recognition using WiFi," Proc. ACM Interact., Mobile, Wearable Ubiquitous Technol., vol. 2, no. 1, pp. 1-21, 2018.
[10] Y. Zhang and X. Jiang, "Recent Advances on Deep Learning for Sign Language Recognition," Comput. Model. Eng. Sci., vol. 139, no. 3, 2024.
[11] Papastratis, C. Chatzikonstantinou, D. Konstantinidis, K. Dimitropoulos, and P. Daras, "Artificial intelligence technologies for sign language," Sensors, vol. 21, no. 17, p. 5843, 2021.
[12] M. A. Ahmed, B. B. Zaidan, A. A. Zaidan, M. M. Salih, Z. T. Al-Qaysi, and A. H. Alamoodi, "Based on wearable sensory device in 3D-printed humanoid: A new real-time sign language recognition system," Measurement, vol. 168, p. 108431, 2021.
[13] A. Alhussan, M. M. Eid, and W. H. Lim, "Advancing Communication for the Deaf: A Convolutional Model for Arabic Sign Language Recognition," Full Length Article, vol. 5, no. 1, pp. 38-8, 2023.
[14] S. Sharma, R. Gupta, and A. Kumar, "A TinyML solution for an IoT-based communication device for hearing impaired," Expert Syst. Appl., vol. 246, p. 123147, 2024.
[15] Almjally and W. S. Almukadi, "Deep computer vision with artificial intelligence based sign language recognition to assist hearing and speech-impaired individuals," Sci. Rep., vol. 15, p. 32268, 2025.
[16] M. Goyal and G. Basavarajappa, "A Wearable IoT Based Assistive Device to Aid Communication With Hearing Impaired," in 2023 IEEE Microwaves, Antennas, and Propagation Conf. (MAPCON), 2023, pp. 1-4.
[17] N. Shirisha, D. B. V. Jagannadham, P. Parshapu, S. M. R. Singu, V. S. Kumari, and D. A. Subhahan, "Hand Talk: Sign Language to Text Converter using CNN," in 2024 8th Int. Conf. I-SMAC (IoT in Social, Mobile, Analytics and Cloud)(I-SMAC), 2024, pp. 1654-1659.
[18] M. Buttar, U. Ahmad, A. H. Gumaei, A. Assiri, M. A. Akbar, and B. F. Alkhamees, "Deep learning in sign language recognition: a hybrid approach for the recognition of static and dynamic signs," Mathematics, vol. 11, no. 17, p. 3729, 2023.
[19] H. Xu, Y. Zhang, Z. Yang, H. Yan, and X. Wang, "RF-CSign: A Chinese Sign Language Recognition System Based on Large Kernel Convolution and Normalization-Based Attention," IEEE Access, vol. 11, pp. 133767-133780, 2023.
[20] S. A. Shaban and D. L. Elsheweikh, "An Intelligent Android System for Automatic Sign Language Recognition and Learning," J. Adv. Inf. Technol., vol. 15, no. 8, 2024.
[21] B. Senthilnayaki, P. Pandiaraja, P. Gupta, S. Aluvala, and T. Stephan, "Enhanced Health Monitoring Using IoT-Embedded Smart Glove and Machine Learning," in 2023 3rd Int. Conf. Innovative Sustainable Comput. Technol. (CISCT), 2023, pp. 1-5.
[22] Revathi, N. Sasikaladevi, D. Arunprasanth, and N. Raju, "Raspberry Pi-based robust speech command recognition for normal and hearing-impaired (HI)," Multimedia Tools Appl., vol. 83, no. 17, pp. 51589-51613, 2024.
[23] L. Al Khuzayem, S. Shafi, S. Aljahdali, R. Alkhamesie, and O. Alzamzami, "Efhamni: A deep learning-based Saudi sign language recognition application," Sensors, vol. 24, no. 10, p. 3112, 2024.
[24] M. Maashi, H. G. Iskandar, and M. Rizwanullah, "IoT-driven smart assistive communication system for the hearing impaired with hybrid deep learning models for sign language recognition," Sci. Rep., vol. 15, p. 6192, 2025.
[25] S. Elhassen and L. Al Khuzayem, "Bridging information science and deep learning: Transformer models for isolated Saudi Sign Language recognition," Appl. Math. Inf. Sci., vol. 19, no. 6, pp. 1345–1357, 2025.
[26] Sharifrazi, R. Alizadehsani, M. Roshanzamir, J. H. Joloudari, A. Shoeibi, M. Jafari, S. Hussain, Z. A. Sani, F. Hasanzadeh, F. Khozeimeh, and A. Khosravi, "Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images," Biomed. Signal Process. Control, vol. 68, p. 102622, 2021.
[27] J. Di, W. Guo, J. Liu, L. Ren, and J. Lian, "AMMNet: A multimodal medical image fusion method based on an attention mechanism and MobileNetV3," Biomed. Signal Process. Control, vol. 96, p. 106561, 2024.
[28] C. Y. Wang, A. Bochkovskiy, and H. Y. M. Liao, "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2023, pp. 7464-7475.
[29] R. Zhang, S. Wang, T. Xie, S. Duan, and M. Chen, "Dynamic User Interface Generation for Enhanced Human-Computer Interaction Using Variational Autoencoders," arXiv preprint arXiv:2412.14521, 2024.
[30] L. Kezar, Z. Sehyr, and J. Thomason, "Phonological Representation Learning for Isolated Signs Improves Out-of-Vocabulary Generalization," arXiv preprint arXiv:2509.04745, 2025.
[31] S. Ye, B. Da, L. Qi, H. Xiao, and S. Li, "Condition Monitoring of Marine Diesel Lubrication System Based on an Optimized Random Singular Value Decomposition Model," Machines, vol. 13, no. 1, p. 7, 2025.
[32] Y. Fan, Y. Zhang, B. Guo, X. Luo, Q. Peng, and Z. Jin, "A hybrid sparrow search algorithm of the hyperparameter optimization in deep learning," Mathematics, vol. 10, no. 16, p. 3019, 2022.
[33] "Sign Language Gesture Images Dataset," Kaggle. [Online]. Available: https://www.kaggle.com/datasets/ahmedkhanak1995/sign-language-gesture-images-dataset
[34] C. Dong, M. C. Leu, and Z. Yin, "American sign language alphabet recognition using microsoft kinect," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, Boston, MA, USA, Jun. 2015, pp. 44–52.
[35] N. Kasukurthi, B. Rokad, S. Bidani, and D. Dennisan, "American Sign Language Alphabet Recognition using Deep Learning," arXiv preprint arXiv:1905.05487, 2019.
[36] W. Tao, M. C. Leu, and Z. Yin, "American Sign Language alphabet recognition using Convolutional Neural Networks with multiview augmentation and inference fusion," Eng. Appl. Artif. Intell., vol. 76, pp. 202–213, 2018.
[37] R. Rastgoo, K. Kiani, and S. Escalera, "Multi-modal deep hand sign language recognition in still images using restricted Boltzmann machine," Entropy, vol. 20, p. 809, 2018.
[38] N. Aslam, K. Abid, and S. Munir, "Robot assist sign language recognition for hearing impaired persons using deep learning," VAWKUM Trans. Comput. Sci., vol. 11, no. 1, pp. 245–267, 2023.
[39] Baihan, A. I. Alutaibi, and M. Alshehri, "Sign language recognition using modified deep learning network and hybrid optimization: a hybrid optimizer (HO) based optimized CNNSa-LSTM approach," Sci. Rep., vol. 14, p. 26111, 2024.
[40] S. Mohsin, B. W. Salim, A. K. Mohamedsaeed, B. F. Ibrahim, and S. R. Zeebaree, "American sign language recognition based on transfer learning algorithms," Int. J. Intell. Syst. Appl. Eng., vol. 12, no. 5s, pp. 390–399, 2024.
[41] National Geographic Society, "Sign Language." [Online]. Available: https://education.nationalgeographic.org/resource/sign-language/#