Volume 17, Issue 2, pp. 404-414, 2025 | Full Length Article
N. B. Mahesh Kumar 1*, Subbulakshmi M. 2, T. Baranidharan 3, Mohana Sundharam M. 4, Geetha M. P. 5
DOI: https://doi.org/10.54216/JISIoT.170226
Traditional recommendation systems primarily rely on user behavior, ratings, and content-based preferences to suggest products or services. However, they often overlook the nuanced emotional context that significantly influences consumer decision-making. This paper proposes a Sentiment-Enhanced Recommendation System (SERS) that integrates sentiment analysis with collaborative and content-based filtering to better capture the affective dimensions of user preferences. By analyzing user-generated content such as reviews, comments, and social media posts using deep learning-based sentiment classifiers, the proposed model quantifies emotional polarity and intensity. These sentiment signals are then incorporated into the recommendation pipeline using hybrid matrix factorization and attention mechanisms, enabling dynamic adaptation to users' emotional states. Experimental evaluations conducted on datasets from Amazon and Yelp demonstrate significant improvements in precision, recall, and user satisfaction scores compared to traditional models. The findings highlight the critical role of emotions in shaping consumer behavior and underscore the importance of affect-aware personalization in modern recommendation systems.
Keywords: Sentiment-Enhanced Recommendation, Consumer Behavior, Emotional Intelligence in AI, Deep Learning
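To make the pipeline described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of how review-level sentiment signals could be blended into a matrix-factorization scoring step and an attention-style aggregation over a user's history. The sentiment matrix S, the blending weight alpha, and both helper functions are illustrative assumptions; in the proposed system the sentiment scores would come from deep-learning classifiers and the attention weights from the hybrid model itself.

# Hedged sketch: sentiment-weighted matrix factorization and a simple
# attention step over a user's history. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 50, 40, 8                 # toy problem size, latent dim k
U = rng.normal(scale=0.1, size=(n_users, k))    # user latent factors
V = rng.normal(scale=0.1, size=(n_items, k))    # item latent factors

# Per (user, item) sentiment polarity * intensity in [-1, 1], e.g. derived
# from review text. Sparse in practice; dense toy matrix here, 0 = no review.
S = np.clip(rng.normal(scale=0.5, size=(n_users, n_items)), -1.0, 1.0)

def sentiment_enhanced_scores(U, V, S, alpha=0.3):
    """Blend dot-product MF scores with a sentiment bias term.

    score(u, i) = U_u . V_i + alpha * S[u, i]
    alpha controls how strongly affective signals shift the ranking.
    """
    return U @ V.T + alpha * S

def attention_over_history(user_vec, item_vecs, history_sentiment):
    """Softmax attention over a user's past items, with logits shifted by how
    positively the user felt about each one (a stand-in for the paper's
    attention mechanism; the exact formulation is not reproduced here)."""
    logits = item_vecs @ user_vec + history_sentiment
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ item_vecs                  # sentiment-aware profile vector

scores = sentiment_enhanced_scores(U, V, S)
top5_for_user0 = np.argsort(-scores[0])[:5]
print("Top-5 items for user 0:", top5_for_user0)

# Example: sentiment-aware profile built from user 0's first 10 items.
profile0 = attention_over_history(U[0], V[:10], S[0, :10])

The sketch keeps sentiment as an additive bias for readability; a learned gating or attention weight over the latent factors, as the abstract suggests, would be the natural next refinement.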