Fusion: Practice and Applications

Journal DOI: https://doi.org/10.54216/FPA

ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

Volume 17, Issue 1, PP: 146-158, 2025 | Full Length Article

Wielding Neural Networks to Interpret Facial Emotions in Photographs with Fragmentary Occlusions

K. Anji Reddy 1*, K. Sivarama Krishna 2*, Bhanu Prakash Battula 3, Bajjuri Usha Rani 4, P. V. V. S. Srinivas 5

  • 1 Senior Assistant Professor, Department of Computer Applications, V. R. Siddhartha Engineering College, Vijayawada, India - (anjireddy5558@gmail.com)
  • 2 Associate Professor, Dept. of CSE, Andhra Loyola Institute of Engineering and Technology, Vijayawada, India - (sivaramkosuru@gmail.com)
  • 3 Professor & Head, Department of CSD, KKR & KSR Institute of Technology and Sciences, Guntur, India - (Prakashbattula33@gmail.com)
  • 4 Sr. Assistant Professor, Lakireddy Bali Reddy College of Engineering (A), Mylavaram, India - (bajjuri.usharani2022@gmail.com)
  • 5 Associate Professor, Dept. of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur, Andhra Pradesh, India - (cnu.pvv@kluniversity.in)
  • Doi: https://doi.org/10.54216/FPA.170111

    Received: November 25, 2023 | Revised: March 17, 2024 | Accepted: July 14, 2024
    Abstract

    For many years, scientists have studied how people express emotion through body language and facial expressions. Accurately interpreting a person's emotions from a single image, however, remains difficult, and the task becomes considerably harder when the face is occluded by fragmentary blocks. With advances in deep learning, facial expression recognition in images can now detect emotions such as happiness, sadness, anger, fear, and surprise with increasing accuracy and reliability. This research examines the effectiveness of neural networks for identifying facial emotions in photographs when occlusions are present. The FER2013, CREMA-D, and RAVDESS datasets were used to train the model; the datasets were altered by implanting occlusions at random positions in the images, and these altered datasets were also used to evaluate the model. The challenges and opportunities that arise when neural networks are applied in this context are explored, and insight is provided into the most suitable approach for accomplishing the task.
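    The abstract does not specify how the occlusions were implanted, so the following is only a minimal sketch of one common way to alter a dataset as described: overlaying a single black rectangular block of random size at a random position in each image. The fraction bounds, the block color, and the function name are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def implant_random_occlusion(image, rng, min_frac=0.1, max_frac=0.3):
    """Overlay one black rectangular block at a random position.

    `image` is an H x W (grayscale) or H x W x C array. The block's side
    lengths are drawn uniformly between min_frac and max_frac of the
    corresponding image dimension (bounds are assumed values).
    """
    h, w = image.shape[:2]
    bh = rng.integers(int(h * min_frac), int(h * max_frac) + 1)
    bw = rng.integers(int(w * min_frac), int(w * max_frac) + 1)
    top = rng.integers(0, h - bh + 1)
    left = rng.integers(0, w - bw + 1)
    occluded = image.copy()  # leave the original dataset image intact
    occluded[top:top + bh, left:left + bw] = 0  # zero out the block
    return occluded

rng = np.random.default_rng(0)
# FER2013 images are 48x48 grayscale; a random array stands in for one here.
face = rng.integers(0, 256, size=(48, 48), dtype=np.uint8)
occluded = implant_random_occlusion(face, rng)
```

    Applying such a transform both to the training set and to a held-out copy used for evaluation matches the abstract's description of altered datasets serving both roles.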

    Keywords:

    Neural Networks, Deep Learning, Occlusions, Emotion Interpretation, Human-Computer Interaction


    Cite This Article As:
    Anji Reddy, K., Sivarama Krishna, K., Prakash Battula, B., Usha Rani, B., Srinivas, P. V. V. S. (2025). Wielding Neural Networks to Interpret Facial Emotions in Photographs with Fragmentary Occlusions. Fusion: Practice and Applications, 17(1), 146-158. DOI: https://doi.org/10.54216/FPA.170111