Journal of Intelligent Systems and Internet of Things

Journal DOI

https://doi.org/10.54216/JISIoT


ISSN (Online): 2690-6791 | ISSN (Print): 2769-786X

Volume 17, Issue 2, pp. 88-100, 2025 | Full Length Article

Designing Explainable Deep Learning Models for Biomedical Data Analysis and Clinical Prediction Enhancement

Maha Rahrouh 1 , Walid Alayash 2 , Inas Salah Mahmoud 3 , Marwa Hussien Mohamed 4 *

  • 1 Business Department, Al Ain University, Al Ain, UAE - (maha.rahrouh@aau.ac.ae)
  • 2 Computer Technology Engineering Department, Engineering Technologies College, Al-Esraa University Baghdad, 1008, Iraq - (walid@esraa.edu.iq)
  • 3 Biomedical Engineering Department, Engineering College, Al-Esraa University Baghdad, 10081, Iraq - (inas.salah@esraa.edu.iq)
  • 4 Computer Technology Engineering Department, Engineering Technologies College, Al-Esraa University Baghdad, 1008, Iraq - (maraw@esraa.edu.iq)
  • DOI: https://doi.org/10.54216/JISIoT.170207

    Received: January 05, 2025 Revised: March 07, 2025 Accepted: May 25, 2025
    Abstract

    Recent advancements in biomedical data analysis have significantly transformed clinical decision-making. However, the inherent complexity and heterogeneity of healthcare data continue to present major challenges. Traditional deep learning models, while powerful, often lack transparency, limiting their adoption in clinical settings due to their "black-box" nature. To address this critical gap, this study introduces a novel Explainable Deep Learning (XDL) framework that integrates high predictive accuracy with interpretability, enabling clinicians to trust and validate AI-driven insights. The proposed framework leverages advanced interpretability techniques—such as Grad-CAM for visual attribution and SHAP for feature importance analysis—to analyze multimodal biomedical data, including clinical imaging, genomic sequencing, and electronic health records. Experimental evaluations across three benchmark datasets demonstrated the model’s strong performance, achieving an accuracy of 91%, sensitivity of 95.4%, specificity of 98.6%, and an AUC of 99%, while maintaining an interpretability score of 92% as rated by domain experts. Compared to non-explainable models, the proposed approach showed a 12.3% increase in interpretability and a 5.8% improvement in accuracy. Importantly, attention map analysis revealed alignment with clinically relevant biomarkers in 93% of cases and uncovered previously overlooked prognostic patterns in 18% of patient cohorts. These findings underscore the model’s potential to enhance diagnostic precision and support more informed clinical decisions. Moreover, the algorithm reduced diagnostic time by 23% due to its provision of actionable insights. The hybrid approach—combining built-in attention mechanisms with external interpretability tools—ensures seamless integration into clinical workflows while supporting compliance with regulatory standards for transparency.
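The abstract refers to Grad-CAM for visual attribution on clinical images. As a generic illustration of that technique (not the authors' implementation; the function name and the toy arrays below are hypothetical), Grad-CAM pools the gradients of a class score over the spatial dimensions of a convolutional layer to obtain per-channel weights, then forms a ReLU-rectified weighted sum of the activation maps:

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Generic Grad-CAM sketch.

    activations, gradients: (channels, H, W) arrays taken from a chosen
    convolutional layer for one input image and one target class.
    """
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))                      # (channels,)
    # Weighted sum of activation maps, rectified so only features that
    # positively support the class contribute to the heatmap.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: random tensors stand in for real network outputs.
rng = np.random.default_rng(0)
acts = rng.random((8, 14, 14))
grads = rng.standard_normal((8, 14, 14))
heatmap = grad_cam(acts, grads)       # (14, 14) map with values in [0, 1]
```

In practice the activations and gradients would come from a framework hook on the last convolutional layer of the diagnostic network; the resulting heatmap is what clinicians compare against known biomarkers, as the abstract describes.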

    Keywords :

    Explainable AI (XAI), Deep Learning in Healthcare, Medical Imaging Interpretation, Genomic Data Analysis, Clinical Decision Support, Interpretability in Neural Networks


    Cite This Article As:
    M. Rahrouh, W. Alayash, I. S. Mahmoud, and M. H. Mohamed, "Designing Explainable Deep Learning Models for Biomedical Data Analysis and Clinical Prediction Enhancement," Journal of Intelligent Systems and Internet of Things, vol. 17, no. 2, pp. 88-100, 2025. DOI: https://doi.org/10.54216/JISIoT.170207