Volume 18, Issue 1, pp. 169-184, 2026 | Full Length Article
Faisal Binsar 1*, Sasmoko 2
DOI: https://doi.org/10.54216/JISIoT.180112
The use of Artificial Intelligence (AI) in medical diagnosis has evolved rapidly with the adoption of large language models and explainability techniques. This study investigates the intersection of Chain-of-Thought (CoT) reasoning and Explainable AI (XAI) in the development of trustworthy diagnostic systems, particularly within Internet of Things (IoT)-enabled healthcare environments. A systematic review of 106 Scopus-indexed publications (2016–2025) was conducted, supported by Latent Dirichlet Allocation (LDA) topic modeling and keyword co-occurrence network analysis to identify dominant research themes and gaps. Findings reveal that while CoT and XAI are each actively studied, their integration within real-time, distributed, and resource-constrained medical systems remains limited. Most research emphasizes either performance or interpretability in isolation, with minimal effort to embed step-wise reasoning into deployable clinical AI pipelines. Moreover, few studies address how CoT can function effectively in the edge computing or federated learning scenarios common to IoT infrastructures. To address this gap, we propose a multi-layered conceptual framework that integrates CoT reasoning, machine learning predictors, XAI methods, and IoT deployment models. The framework reflects the shift toward user-centric, transparent, and adaptive AI solutions in smart healthcare, providing a structured path from multimodal data ingestion to clinically interpretable, real-time decision support. This study contributes a novel perspective on reasoning-driven explainability and offers design guidance for the future development of interpretable, scalable, and deployable AI systems in medical applications.
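As an illustration of the review methodology summarized in the abstract, the sketch below shows one way LDA topic modeling and keyword co-occurrence network analysis could be applied to a corpus of publication abstracts, using scikit-learn and networkx. The three-document corpus, the keyword lists, and all parameter choices are placeholder assumptions for illustration, not the study's actual Scopus dataset or configuration.

# Illustrative sketch only: hypothetical corpus and parameters, not the
# study's actual Scopus dataset or configuration.
from itertools import combinations
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import networkx as nx

# Hypothetical stand-ins for the reviewed abstracts and author keywords.
abstracts = [
    "chain-of-thought prompting improves diagnostic reasoning in LLMs",
    "SHAP and LIME explain predictions of clinical machine learning models",
    "federated learning enables privacy-preserving IoMT analytics at the edge",
]
keyword_lists = [
    ["chain-of-thought", "reasoning", "LLM"],
    ["explainable AI", "SHAP", "LIME"],
    ["federated learning", "IoMT", "edge computing"],
]

# LDA topic modeling over a bag-of-words representation of the abstracts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

# Report the top terms per discovered topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")

# Keyword co-occurrence network: one weighted edge per keyword pair that
# appears together in the same publication.
pair_counts = Counter()
for kws in keyword_lists:
    pair_counts.update(combinations(sorted(kws), 2))

graph = nx.Graph()
for (a, b), w in pair_counts.items():
    graph.add_edge(a, b, weight=w)
print(graph.number_of_nodes(), "keywords,", graph.number_of_edges(), "edges")

In the study itself, clusters in such a network would be read against the LDA topics to locate under-connected themes, such as CoT reasoning within IoT deployments.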
Explainable AI, CoT Reasoning, Natural Language Processing, Smart Medical Systems, Internet of Medical Things
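The CoT layer of the proposed framework invites a similar sketch: step-wise diagnostic reasoning recorded as an auditable trace that an XAI layer or a clinician can inspect. Every name and value below (ReasoningStep, DiagnosticTrace, the vital-sign readings) is a hypothetical illustration of how step-wise reasoning might be embedded in a deployable clinical pipeline, not the framework's published interface.

# Hypothetical sketch of CoT-style reasoning as an auditable trace; the
# class names and clinical thresholds are illustrative, not from the paper.
from dataclasses import dataclass, field


@dataclass
class ReasoningStep:
    description: str  # natural-language statement of the inference step
    evidence: dict    # the sensor readings or features the step relied on
    conclusion: str   # intermediate finding passed to the next step


@dataclass
class DiagnosticTrace:
    steps: list = field(default_factory=list)

    def add(self, description: str, evidence: dict, conclusion: str) -> None:
        self.steps.append(ReasoningStep(description, evidence, conclusion))

    def explain(self) -> str:
        # Render the chain of reasoning as a clinician-readable rationale.
        return "\n".join(
            f"{i + 1}. {s.description} -> {s.conclusion}"
            for i, s in enumerate(self.steps)
        )


# Hypothetical usage with made-up vitals from an IoT monitoring device.
trace = DiagnosticTrace()
trace.add("Compare resting heart rate with age-adjusted norm",
          {"heart_rate_bpm": 118}, "tachycardia present")
trace.add("Cross-reference tachycardia with temperature reading",
          {"temperature_c": 39.1}, "fever with tachycardia; flag for review")
print(trace.explain())

Because each step keeps the evidence it used, the same trace could feed feature-attribution methods such as SHAP or LIME at the XAI layer, which is the kind of reasoning-driven explainability the framework targets.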