Fusion: Practice and Applications

Journal DOI: https://doi.org/10.54216/FPA


ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

Volume 16, Issue 1, PP: 23-36, 2024 | Full Length Article

Automated Gesture Recognition Using Zebra Optimization Algorithm with Deep Learning Model for Visually Challenged People

Mohammed Basheri 1 *

  • 1 Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia - (mbasheri@kau.edu.sa)
  • DOI: https://doi.org/10.54216/FPA.160102

    Received: July 29, 2023 Revised: November 17, 2023 Accepted: April 12, 2024
    Abstract

    Gesture recognition for visually challenged people plays a vital role in improving their convenience and their interaction with digital devices and environments. It involves developing systems that allow them to interact with digital devices through hand actions or gestures. To improve user-friendliness, these systems favour intuitive and easily learnable gestures, often integrating wearable devices equipped with sensors for precise detection. Incorporating auditory or haptic feedback provides real-time cues about the successful completion of recognized gestures. Machine learning (ML) and deep learning (DL) methods are useful tools for accurate gesture detection, with customization options to accommodate individual preferences. In this view, this article concentrates on the design and development of an Automated Gesture Recognition using Zebra Optimization Algorithm with Deep Learning (AGR-ZOADL) model for visually challenged people. The AGR-ZOADL technique aims to recognize gestures to aid visually challenged people. In the AGR-ZOADL technique, the primary level of data pre-processing is performed by median filtering (MF). Besides, the AGR-ZOADL technique applies the NASNet model to learn complex features from the preprocessed data. To enhance the performance of the NASNet model, a ZOA-based hyperparameter tuning procedure is performed. For the gesture recognition process, a stacked long short-term memory (SLSTM) model is applied. The performance validation of the AGR-ZOADL technique is carried out using a benchmark dataset. The experimental values state that the AGR-ZOADL methodology attains significant performance over other existing approaches.
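    To make the pipeline described above concrete, the following is a minimal sketch (not the paper's implementation) of how the stages could be wired together in Python with OpenCV and Keras: median filtering of each frame, a frozen NASNet backbone as a per-frame feature extractor, and a stacked LSTM classifier on top. The sequence length, image size, kernel size, layer widths, and class count are illustrative assumptions, not values reported in the paper.

```python
import cv2
from tensorflow.keras import layers, models
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras.applications.nasnet import preprocess_input

SEQ_LEN, IMG_SIZE, NUM_CLASSES = 8, 224, 10   # assumed values, not from the paper

def preprocess_frame(frame_bgr):
    """Pre-processing stage: median filtering (MF), then resize and scale."""
    denoised = cv2.medianBlur(frame_bgr, ksize=3)         # MF with an assumed 3x3 kernel
    resized = cv2.resize(denoised, (IMG_SIZE, IMG_SIZE))
    return preprocess_input(resized.astype("float32"))

# Frozen NASNet backbone used as a per-frame feature extractor.
backbone = NASNetMobile(include_top=False, pooling="avg",
                        input_shape=(IMG_SIZE, IMG_SIZE, 3))
backbone.trainable = False

# Stacked LSTM (SLSTM) head over the sequence of per-frame NASNet features.
model = models.Sequential([
    layers.TimeDistributed(backbone,
                           input_shape=(SEQ_LEN, IMG_SIZE, IMG_SIZE, 3)),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```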

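    The abstract also mentions ZOA-based hyperparameter tuning of the NASNet/SLSTM model. The snippet below is a deliberately simplified, hypothetical population-search sketch in that spirit; the toy fitness surrogate, the search ranges, and the "foraging"/"defence" update steps are assumptions for illustration and do not reproduce the exact update equations of the published Zebra Optimization Algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed search space: [log10(learning rate), LSTM units].
LOW = np.array([-4.0, 32.0])
HIGH = np.array([-2.0, 256.0])

def fitness(x):
    """Toy surrogate objective; a real run would train/evaluate the model with
    learning rate 10**x[0] and int(x[1]) LSTM units and return validation loss."""
    return (x[0] + 3.0) ** 2 + ((x[1] - 128.0) / 64.0) ** 2

pop = rng.uniform(LOW, HIGH, size=(10, 2))      # population of candidate settings
T = 30                                          # number of iterations
for t in range(1, T + 1):
    fit = np.array([fitness(z) for z in pop])
    pioneer = pop[np.argmin(fit)]               # best candidate guides "foraging"
    for i in range(len(pop)):
        r = rng.random(2)
        if rng.random() < 0.5:                  # foraging: move toward the pioneer
            step = r * (pioneer - (1 + rng.random()) * pop[i])
        else:                                   # defence: shrinking random escape
            step = (2 * r - 1) * (1 - t / T) * pop[i]
        cand = np.clip(pop[i] + step, LOW, HIGH)
        if fitness(cand) < fit[i]:              # greedy acceptance
            pop[i] = cand

best = pop[np.argmin([fitness(z) for z in pop])]
print("best log10(lr), LSTM units:", best)
```

    In practice the surrogate objective would be replaced by training the recognition model with each candidate setting and returning its validation loss, so the search selects the hyperparameters that generalize best.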
    Keywords:

    Gesture Recognition, Visually Challenged People, Deep Learning, Zebra Optimization Algorithm, Artificial Intelligence


    Cite This Article As:
    Basheri, M. (2024). Automated Gesture Recognition Using Zebra Optimization Algorithm with Deep Learning Model for Visually Challenged People. Fusion: Practice and Applications, 16(1), 23-36. DOI: https://doi.org/10.54216/FPA.160102