Full Length Article
Fusion: Practice and Applications
Volume 15, Issue 2, PP: 196-207, 2024

Title

Hybrid CNN-XGB Framework for Enhancing Human Activity Recognition

Farah Hatem Khorsheed 1*, Raniah Hazim 2, Sarah A. Hassan 3, Qusay Saihood 4

1  Department of Computer Engineering, College of Engineering, University of Diyala, Iraq
    (farah_hatam@uodiyala.edu.iq)

2  Department of Computer Engineering, College of Engineering, University of Diyala, Iraq
    (Rania_hazim_enge@uodiyala.edu.iq)

3  Department of Computer Engineering, College of Engineering, University of Diyala, Iraq
    (Sarah.amir@uodiyala.edu.iq)

4  Prosthetic Dental Techniques Department, College of Health and Medical Techniques, Ashur University, Baghdad, Iraq
    (qusaysaihood@au.edu.iq)


DOI: https://doi.org/10.54216/FPA.150218

Received: August 02, 2023 Revised: December 09, 2023 Accepted: April 07, 2024

Abstract:

Human Activity Recognition (HAR) is an important modern research field concerned with studying and analyzing human actions and behaviors. HAR offers great potential for a wide range of applications in various fields that enhance health, safety, and efficiency. Because human activities, and the ways people carry them out, are highly diverse, recognizing human activity is difficult. The capabilities provided by Artificial Intelligence (AI) tools for analyzing and understanding hidden patterns in complex data can greatly facilitate the HAR process. Over the past ten years there has been a strong trend toward using Machine Learning (ML) and Deep Learning (DL) techniques to analyze and understand big data for HAR. Although many studies use these techniques, their accuracy still needs improvement due to several challenges: data complexity, class imbalance, choosing an appropriate feature selection technique for the ML method, and tuning the hyperparameters of the ML method used. To overcome these challenges, this study proposes an effective two-stage framework: a data preprocessing procedure that includes data balancing and data normalization, followed by a hybrid CNN-XGB model that combines a Convolutional Neural Network (CNN) with a fine-tuned XGBoost (XGB) classifier for accurate HAR. The CNN-XGB model achieved excellent results when trained and tested on the HCI-HAR dataset, reaching an accuracy of up to 99.0%. Effective HAR opens the door to many applications that improve the quality of life in various areas of our daily lives.
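The two-stage pipeline described in the abstract can be sketched in miniature. The snippet below is only an illustrative sketch, not the authors' implementation: the dataset, window length, oversampling strategy, and convolution kernel are all hypothetical assumptions, and a single hand-rolled 1-D convolution with ReLU and global max pooling stands in for the CNN feature extractor whose outputs would feed an XGBoost classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced dataset (hypothetical): 20 windows of class 0 and
# 5 windows of class 1, each a window of 12 raw sensor readings.
X = np.vstack([rng.normal(0.0, 1.0, (20, 12)),
               rng.normal(3.0, 1.0, (5, 12))])
y = np.array([0] * 20 + [1] * 5)

# Stage 1a: balance classes by random oversampling of the minority class
# (one possible balancing method; the paper's exact method may differ).
counts = np.bincount(y)
n_max = counts.max()
idx = np.concatenate([
    rng.choice(np.flatnonzero(y == c), size=n_max, replace=True)
    for c in range(len(counts))
])
X_bal, y_bal = X[idx], y[idx]

# Stage 1b: per-feature min-max normalization to [0, 1].
mn, mx = X_bal.min(axis=0), X_bal.max(axis=0)
X_norm = (X_bal - mn) / (mx - mn + 1e-12)

# Stage 2 (sketch): one 1-D convolution + ReLU + global max pooling
# stands in for the CNN feature extractor; in the full CNN-XGB model
# the pooled feature maps would be passed to an XGBoost classifier.
kernel = rng.normal(size=3)
conv = np.array([np.convolve(row, kernel, mode="valid") for row in X_norm])
features = np.maximum(conv, 0.0).max(axis=1)  # one pooled value per sample

print(X_norm.shape, features.shape, np.bincount(y_bal))
```

In practice the convolution stage would be a trained multi-layer CNN (e.g. Keras) and the classifier an `xgboost.XGBClassifier` fitted on the extracted features, with its hyperparameters tuned as the paper describes.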

Keywords:

Human Activity Recognition; Machine Learning; Deep Learning; Convolutional Neural Network; XGBoost; HCI-HAR dataset.



