Journal of Cybersecurity and Information Management

Journal DOI: https://doi.org/10.54216/JCIM

ISSN (Online): 2690-6775 | ISSN (Print): 2769-7851

Volume 13, Issue 2, PP: 109-123, 2024 | Full Length Article

Hybridization of Deep Sequential Network for Emotion Recognition Using Unconstraint Video Analysis

P. Naga Bhushanam 1, Selva Kumar S. 2*

  • 1 School of Computer Science and Engineering, VIT-AP University, Andhra Pradesh, India - (nagabhushnam.22phd7012@vitap.ac.in)
  • 2 School of Computer Science and Engineering, VIT-AP University, Andhra Pradesh, India - (selvakumar.s@vitap.ac.in)
  • DOI: https://doi.org/10.54216/JCIM.130209

    Received: January 05, 2024 Revised: March 09, 2024 Accepted: May 04, 2024
    Abstract

    Facial expressions have proven to be a reliable way to discern human emotions across a wide range of circumstances. Amid the current surge of research on emotion detection, facial expression recognition (FER) has emerged as a research topic for identifying essential emotions. Happiness is one of the basic emotions everyone may experience, and facial expressions detect it more effectively than other emotion-measuring methods. Most existing techniques are designed to recognize many emotions at once to achieve the highest overall precision, so maximizing recognition accuracy for a single emotion remains challenging. Some techniques identify a single happy mood in unconstrained video, but they are limited by the extreme head-pose fluctuations they must handle, and their accuracy still needs improvement. This research proposes a novel hybrid facial emotion recognition approach for unconstrained video to improve accuracy. A Deep Belief Network (DBN) combined with long short-term memory (LSTM) is employed to extract dynamic information from the video frames. The experiments apply decision-level and feature-level fusion techniques to an unconstrained video dataset. The results show that the proposed hybrid approach can be more precise than some existing facial expression models.

    Keywords:

    Unconstrained video, emotion recognition, prediction, network model, feature representation
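
    The abstract describes per-frame features from a Deep Belief Network fed through an LSTM that models temporal dynamics, with decision-level and feature-level fusion over an unconstrained video dataset. Below is a minimal PyTorch sketch of that pipeline. It is not the authors' implementation: the layer sizes, 2048-dimensional frame descriptors, 16-frame clips, and seven-class emotion set are illustrative assumptions, and a true DBN would be greedily pretrained as stacked RBMs rather than trained end to end.

```python
# Minimal sketch (not the authors' code): DBN-style per-frame features feed an
# LSTM that captures dynamics across video frames. All dimensions are assumed.
import torch
import torch.nn as nn

class DBNEncoder(nn.Module):
    """Stacked sigmoid layers standing in for a Deep Belief Network; a real
    DBN would greedily pretrain these layers as RBMs before fine-tuning."""
    def __init__(self, in_dim=2048, hidden=(1024, 512, 256)):
        super().__init__()
        layers, prev = [], in_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.Sigmoid()]
            prev = h
        self.net = nn.Sequential(*layers)

    def forward(self, x):                    # x: (batch, frames, in_dim)
        b, t, d = x.shape
        return self.net(x.reshape(b * t, d)).reshape(b, t, -1)

class DBNLSTM(nn.Module):
    """Hybrid model: DBN features per frame, LSTM across the frame sequence."""
    def __init__(self, in_dim=2048, feat_dim=256, num_classes=7):
        super().__init__()
        self.encoder = DBNEncoder(in_dim, hidden=(1024, 512, feat_dim))
        self.lstm = nn.LSTM(feat_dim, 128, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, clips):                # clips: (batch, frames, in_dim)
        feats = self.encoder(clips)          # static per-frame features
        _, (h_n, _) = self.lstm(feats)       # temporal dynamics across frames
        return self.head(h_n[-1])            # emotion logits per clip

model_a, model_b = DBNLSTM(), DBNLSTM()
clips = torch.randn(4, 16, 2048)             # 4 clips of 16 frame descriptors
# Decision-level fusion (illustrative): average the class probabilities of two
# models; feature-level fusion would instead concatenate their per-frame
# features before the LSTM.
fused = 0.5 * model_a(clips).softmax(-1) + 0.5 * model_b(clips).softmax(-1)
print(fused.shape)                            # torch.Size([4, 7])
```

    In this sketch the final LSTM hidden state summarizes the whole clip, which is one common way to turn a sequence of per-frame features into a single clip-level emotion prediction.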


    Cite This Article As:
    Naga Bhushanam, P., and Selva Kumar, S., "Hybridization of Deep Sequential Network for Emotion Recognition Using Unconstraint Video Analysis," Journal of Cybersecurity and Information Management, vol. 13, no. 2, pp. 109-123, 2024. DOI: https://doi.org/10.54216/JCIM.130209