Volume 17, Issue 1, pp. 389-397, 2025
M. Sivasankar 1*, K. Murugan 2, P. Gouthami 3, G. Balambigai 4, Kalaivani T. 5
DOI: https://doi.org/10.54216/JISIoT.170127
Social media platforms have become pivotal arenas for the public to express emotions, opinions, and sentiments. While traditional sentiment analysis methods predominantly focus on textual data, they often overlook the rich emotional context embedded in images shared alongside posts. This paper presents a novel framework that integrates Visual Sentiment Analysis (VSA) with Natural Language Processing (NLP) techniques to enhance the understanding of public sentiment in social media content. By extracting deep visual features from images with pre-trained CNN models and combining these features with transformer-based text representations (such as BERT), the proposed multimodal sentiment analysis model captures nuanced emotional expressions more effectively than unimodal approaches. Experiments conducted on benchmark datasets of Twitter and Instagram posts demonstrate a significant improvement in sentiment classification accuracy and contextual interpretation. The study highlights the potential of integrated sentiment analysis systems in applications such as brand monitoring, political opinion tracking, and mental health detection.
Visual Sentiment Analysis, Multimodal Sentiment Classification, Social Media Analytics, Natural Language Processing (NLP), Deep Learning, Convolutional Neural Networks (CNN)
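To make the fusion pipeline described in the abstract concrete, the sketch below shows one plausible way to combine a pre-trained CNN image encoder with a BERT text encoder for sentiment classification. It is a minimal, hypothetical PyTorch example: the choice of ResNet-50, the 512-unit fusion head, the three-class output, and the dummy inputs are illustrative assumptions, not the authors' reported configuration.

# Hypothetical sketch of a CNN + BERT late-fusion sentiment classifier.
# Model names, dimensions, and the fusion head are assumptions for
# illustration, not the configuration reported in the paper.
import torch
import torch.nn as nn
from torchvision import models
from transformers import BertModel, BertTokenizer


class MultimodalSentimentClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Pre-trained ResNet-50 as the visual feature extractor (2048-d).
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.visual_encoder = nn.Sequential(*list(resnet.children())[:-1])
        # Pre-trained BERT as the textual feature extractor (768-d pooled output).
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Simple late-fusion head over the concatenated visual and text features.
        self.classifier = nn.Sequential(
            nn.Linear(2048 + 768, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, image, input_ids, attention_mask):
        img_feat = self.visual_encoder(image).flatten(1)   # (B, 2048)
        txt_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output                                     # (B, 768)
        fused = torch.cat([img_feat, txt_feat], dim=1)      # (B, 2816)
        return self.classifier(fused)                       # (B, num_classes)


# Example forward pass on a single dummy post (image + caption).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = MultimodalSentimentClassifier(num_classes=3).eval()
tokens = tokenizer("What a beautiful day at the beach!",
                   return_tensors="pt", padding=True, truncation=True)
image = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed image tensor
with torch.no_grad():
    logits = model(image, tokens["input_ids"], tokens["attention_mask"])
print(logits.softmax(dim=-1))  # probabilities over e.g. {negative, neutral, positive}

In this kind of setup the two encoders can be frozen and only the fusion head trained, or the whole network fine-tuned end to end; which regime the authors use is not stated in the abstract.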