Fusion: Practice and Applications

Journal DOI: https://doi.org/10.54216/FPA


ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

Volume 19, Issue 1, PP: 10-22, 2025 | Full Length Article

Comprehensive Methodology to the Detection and Classification of Emotion in Human Face using EMOTE-Net

Asif Hussain Shaik 1*, Shaik Karimullah 2, Mudassir Khan 3, Fahimuddin Shaik 4

  • 1 Technology Transfer Officer, Department of ECE, Middle East College, Muscat, Oman - (shussain@mec.edu.om)
  • 2 Department of ECE, Annamacharya Institute of Technology and Sciences, Rajampet, Andhra Pradesh, India - (Munnu483@gmail.com)
  • 3 Department of Computer Science at College of Computer Science, Applied College Tanumah, King Khalid University Abha, Saudi Arabia - (mudassirkhan12@gmail.com)
  • 4 Department of ECE, Annamacharya Institute of Technology and Sciences, Rajampet, Andhra Pradesh, India - (fahimaits@gmail.com)
  • DOI: https://doi.org/10.54216/FPA.190102

    Received: November 14, 2024 Revised: January 10, 2025 Accepted: February 05, 2025
    Abstract

    This work presents EMOTE-Net, a network architecture for enhancing facial emotion recognition and classification in video data. The proposed model combines DenseNet for feature extraction with a support vector machine (SVM) for classification. This combination is the model's key strength: DenseNet provides effective high-dimensional feature representations, while the SVM performs sophisticated classification on them. The methodology begins with preprocessing of the video data; bounding-box detection then extracts the regions of interest (ROIs), DenseNet computes feature representations from these regions, and the resulting features are fed into the SVM classifier for final categorization. Evaluation provides clear evidence of the model's effectiveness, with an accuracy of 0.9890, precision of 0.9900, sensitivity of 0.9877, specificity of 0.9972, and an F1 score of 0.9886. The implementation and evaluation results highlight the relevance of EMOTE-Net to real-life applications such as video analytics, human-computer interaction, and surveillance. The work presents a viable approach for object detection and classification in dynamic visual environments.
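    The pipeline described above (ROI extraction, deep feature extraction, SVM classification) can be sketched as follows. This is a minimal, illustrative sketch, not the authors' actual code: the DenseNet backbone is replaced by a stand-in feature function (`extract_features`) and the "ROIs" are synthetic arrays, so the example stays self-contained and runnable.

```python
# Hedged sketch of a DenseNet-features + SVM pipeline, as outlined in the
# abstract. extract_features() is a stand-in for a real DenseNet backbone.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_features(roi_batch):
    """Stand-in for a DenseNet feature extractor: in the real pipeline this
    would return a high-dimensional embedding for each detected face ROI."""
    return roi_batch.reshape(len(roi_batch), -1)

# Toy "ROIs": two emotion classes with clearly separable pixel statistics.
class_a = rng.normal(1.0, 0.1, size=(50, 8, 8))   # e.g. "happy" stand-in
class_b = rng.normal(-1.0, 0.1, size=(50, 8, 8))  # e.g. "sad" stand-in
X = extract_features(np.concatenate([class_a, class_b]))
y = np.array([0] * 50 + [1] * 50)

# SVM classification stage, as in the paper's final step.
clf = SVC(kernel="rbf")
clf.fit(X, y)
print(clf.predict(extract_features(rng.normal(1.0, 0.1, size=(1, 8, 8)))))
```

    In a full implementation, the stand-in extractor would be swapped for pooled activations from a pretrained DenseNet applied to the bounding-box crops, with the SVM trained on those embeddings.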

    Keywords:

    Computer Vision, Bounding Box Detection, Video Analysis, Region of Interest, DenseNet, SVM, Deep Learning


    Cite This Article As:
    Hussain, A., Karimullah, S., Khan, M., & Shaik, F. (2025). Comprehensive Methodology to the Detection and Classification of Emotion in Human Face using EMOTE-Net. Fusion: Practice and Applications, 19(1), 10-22. DOI: https://doi.org/10.54216/FPA.190102