Journal of Artificial Intelligence and Metaheuristics

Journal DOI: https://doi.org/10.54216/JAIM

ISSN (Online): 2833-5597

Tapping into Knowledge: Ontological Data Mining Approach for Detecting Cardiovascular Disease Risk Causes Among Diabetes Patients

Hussein Alkattan, S. K. Towfek, M. Y. Shams

The prevalence of cardiovascular disease (CVD) is a serious public health issue, and it is of particular concern for people with diabetes because of the elevated risk of cardiovascular complications they face. In this study, we propose a novel Ontological Data Mining (ODM) approach for identifying the origins of CVD risk in diabetic patients. We aim to improve the interpretability and precision of prediction models by incorporating domain knowledge and semantic relationships into the data mining process. We examine a large dataset of 70,000 patient records with 11 attributes, all derived from a thorough clinical history and physical examination. Our methodology employs decision trees, support vector machines (SVMs), and gradient boosting (GB). Visual representations such as box plots, distribution plots, and pie charts clarify how critical variables are distributed with respect to CVD outcomes. The proposed ODM method makes it possible to identify significant associations and causal relationships between risk factors and CVD outcomes. Our research has promising implications for improving the treatment of patients with diabetes, facilitating targeted interventions, and enhancing risk assessment and prevention strategies for cardiovascular disease.
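To make the modeling step concrete, the following is a minimal sketch of training the three classifiers named above with scikit-learn. The file name cardio.csv, the cardio label column, and the preprocessing choices are assumptions for illustration, not the authors' exact pipeline:

```python
# Hypothetical sketch: comparing the three classifiers named in the abstract
# on a cardiovascular dataset. File name and column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("cardio.csv")            # assumed: 70,000 records, 11 attributes
X, y = df.drop(columns=["cardio"]), df["cardio"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "SVM": make_pipeline(StandardScaler(), SVC()),  # SVMs need scaled inputs
    "gradient boosting": GradientBoostingClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```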

Doi: https://doi.org/10.54216/JAIM.040101

Vol. 4 Issue. 1 PP. 08-15, (2023)

Bridging the Gap: An Explainable Methodology for Customer Churn Prediction in Supply Chain Management

Adel Oubelaid, Abdelhameed Ibrahim, Ahmed M. Elshewey

Customer churn prediction is a critical task for businesses aiming to retain their valuable customers. Nevertheless, the lack of transparency and interpretability in machine learning models hinders their implementation in real-world applications. In this paper, we introduce a novel methodology for customer churn prediction in supply chain management that addresses the need for explainability. Our approach takes advantage of XGBoost as the underlying predictive model. We recognize the importance of not only accurately predicting churn but also providing actionable insights into the key factors driving customer attrition. To achieve this, we employ Local Interpretable Model-agnostic Explanations (LIME), a state-of-the-art technique for generating intuitive and understandable explanations. By applying LIME to the predictions made by XGBoost, we enable decision-makers to gain insight into the model's decision process and the reasons behind churn predictions. Through a comprehensive case study on customer churn data, we demonstrate the effectiveness of our explainable ML approach. Our methodology not only achieves high prediction accuracy but also offers interpretable explanations that highlight the underlying drivers of customer churn. These insights provide valuable guidance for decision-making processes within supply chain management.
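As a hedged illustration of the LIME-over-XGBoost workflow described above, the sketch below explains a single prediction. The synthetic features and their names are placeholders for the paper's real churn data:

```python
# Hypothetical sketch: explaining one churn prediction from an XGBoost model
# with LIME. The features below are synthetic stand-ins, not the paper's data.
import numpy as np
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["tenure", "order_volume", "late_deliveries", "avg_discount"]
X = rng.normal(size=(1000, 4))
y = (X[:, 2] - X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = XGBClassifier(n_estimators=100).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["retained", "churned"], mode="classification")
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs behind this prediction
```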

Doi: https://doi.org/10.54216/JAIM.040102

Vol. 4 Issue. 1 PP. 16-23, (2023)

Interpreting the Incomprehensible: Benchmarking Visual Explanation Methods for Deep Convolutional Networks

Wei Hong Lim, Marwa M. Eid

Deep Convolutional Networks (CNNs) have revolutionized various fields, including computer vision, but their decision-making process remains largely opaque. To address this interpretability challenge, numerous visual explanation methods have been proposed. However, a comprehensive evaluation and benchmarking of these methods is essential to understand their strengths, limitations, and comparative performance. In this paper, we present a systematic study that benchmarks and compares visual explanation techniques for deep CNNs. We propose a standardized evaluation framework covering established explainability methods. Through extensive experiments, we analyze the effectiveness and interpretability of popular visual explanation methods, including gradient-based methods, activation maximization, and attention mechanisms. Our results reveal nuanced differences between the methods, highlighting their trade-offs and potential applications. We conduct a comprehensive evaluation of visual explanation methods on different deep CNNs; the results support informed selection and adoption of appropriate techniques for interpretability in real-world applications.
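As one concrete instance of the gradient-based family evaluated here, the following is a minimal vanilla-saliency sketch: the gradient of the top class score with respect to the input pixels. The model choice (MobileNetV2) and the random stand-in image are assumptions for brevity:

```python
# Hypothetical sketch of a gradient-based visual explanation (vanilla saliency).
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
img = tf.random.uniform((1, 224, 224, 3))  # stand-in for a preprocessed image

with tf.GradientTape() as tape:
    tape.watch(img)
    preds = model(img)
    top_class = tf.argmax(preds[0])
    score = preds[0, top_class]

grads = tape.gradient(score, img)                     # d(score) / d(pixels)
saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]   # per-pixel importance
print(saliency.shape)  # (224, 224) map; brighter = more influential pixel
```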

Doi: https://doi.org/10.54216/JAIM.040103

Vol. 4 Issue. 1 PP. 24-33, (2023)

Visualizing the Unseen: Exploring GRAD-CAM for Interpreting Convolutional Image Classifiers

Sunil Kumar, Abdelaziz A. Abdelhamid, Zahraa Tarek

Convolutional Neural Networks (CNNs) and other deep learning models have shown exceptional performance in image classification tasks. However, their intrinsic complexity and black-box nature raise questions about their interpretability and reliability. In this study, we explore the visualization method of Gradient-Weighted Class Activation Mapping (GRAD-CAM) and its application to understanding how CNNs make decisions. We start by explaining why interpretability is so important in deep learning and why tools like GRAD-CAM are necessary. We provide a high-level introduction to CNN architecture, focusing on the roles of convolutional layers, pooling layers, and fully connected layers in image classification. Using the Xception model as an illustration, we describe how to generate GRAD-CAM heatmaps that highlight key regions in an image. We demonstrate the benefits of GRAD-CAM in terms of localization accuracy and interpretability by comparing it to other visualization techniques such as Class Activation Mapping (CAM) and Guided Backpropagation. We also investigate GRAD-CAM's potential uses across image classification domains, such as medical imaging, object recognition, and fine-grained classification, and we discuss its limitations, such as its vulnerability to adversarial examples and occlusions. We conclude by outlining extensions planned to address these shortcomings and strengthen the credibility of GRAD-CAM explanations. The work presented in this research enables the analysis and improvement of convolutional image classifiers with greater accuracy and transparency.
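The following sketch shows the standard Grad-CAM computation for Xception, along the lines described above: pool the gradients of the class score over the last convolutional feature map and use them to weight the channels. The random stand-in image replaces a real preprocessed input:

```python
# Hypothetical sketch of the Grad-CAM heatmap computation for Keras Xception.
import tensorflow as tf

model = tf.keras.applications.Xception(weights="imagenet")
grad_model = tf.keras.Model(
    model.input,
    [model.get_layer("block14_sepconv2_act").output, model.output])

img = tf.random.uniform((1, 299, 299, 3))  # stand-in for a preprocessed image

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    score = preds[:, tf.argmax(preds[0])]   # score of the predicted class

grads = tape.gradient(score, conv_out)           # sensitivity of score to features
weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pool the gradients
heatmap = tf.einsum("bijc,bc->bij", conv_out, weights)  # weighted feature maps
heatmap = tf.nn.relu(heatmap[0])                 # keep positive evidence only
heatmap = heatmap / (tf.reduce_max(heatmap) + 1e-8)
print(heatmap.shape)  # (10, 10) map to upsample and overlay on the input image
```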

Doi: https://doi.org/10.54216/JAIM.040104

Vol. 4 Issue. 1 PP. 34-42, (2023)

Mining Semantic Association Rules from RDF Data

Nima Khodadadi, M. G. El-Mahgoub, Rokaia M. Zaki

Many fields rely heavily on the accurate and consistent representation of structured data. RDF (Resource Description Framework) data is essential for effectively expressing and linking information on the Semantic Web. Here, we present a process for extracting semantic association rules from RDF data. Our method employs the Apriori algorithm to mine RDF triples for hidden connections between concepts and relationships. We evaluate our model's performance using metrics such as support, confidence, and lift. We also provide visual representations, such as scatter plots and clustered matrices, to make the discovered correlations easier to understand and analyze. The findings validate our model's ability to unearth significant relationships, which in turn reveal important details about the underlying semantics of the RDF data. We discuss our findings and provide suggestions for further study.
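As a hedged sketch of the pipeline described above, the code below groups each RDF subject's (predicate, object) pairs into a transaction and mines rules with Apriori. The input file name is hypothetical, and mlxtend's implementation stands in for the paper's own miner:

```python
# Hypothetical sketch: Apriori over RDF triples. One transaction per subject;
# each item is a predicate=object pair it participates in.
from collections import defaultdict
import pandas as pd
from rdflib import Graph
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

g = Graph()
g.parse("data.ttl", format="turtle")   # hypothetical RDF input file

transactions = defaultdict(set)
for s, p, o in g:                      # iterate all (subject, predicate, object)
    transactions[s].add(f"{p}={o}")

te = TransactionEncoder()
onehot = te.fit_transform(list(transactions.values()))
df = pd.DataFrame(onehot, columns=te.columns_)

frequent = apriori(df, min_support=0.1, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```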

Doi: https://doi.org/10.54216/JAIM.040105

Vol. 4 Issue. 1 PP. 43-51, (2023)