Fusion: Practice and Applications
FPA
2692-4048
2770-0070
10.54216/FPA
https://www.americaspg.com/journals/show/1310
2018
2018
Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection
Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt
Ahmed
Ahmed
Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt
Nehal N.
Mostafa
Explainable artificial intelligence has received considerable research attention in recent years, driven by the widespread use of black-box techniques in sensitive fields such as medical care and self-driving cars. Artificial intelligence needs explainable methods to uncover model biases, and explainability promotes fairness and transparency in the model. Making artificial intelligence models explainable and interpretable is challenging when black-box models are deployed. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popular approach for handling such data and acquiring more trustworthy, helpful, and precise insights. Compared with more traditional data fusion methods, machine learning's capacity to learn automatically from experience without explicit programming significantly improves fusion's computational and predictive power. This paper comprehensively studies the most prominent explainable artificial intelligence methods applied to anomaly detection. We propose criteria for model transparency against which data fusion analytics techniques can be measured, and we define the evaluation metrics used in explainable artificial intelligence. We present several applications of explainable artificial intelligence and a case study of anomaly detection using machine learning fusion. Finally, we discuss the key challenges and future directions in explainable artificial intelligence.
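The abstract describes a survey rather than a specific implementation, but a minimal sketch of the kind of interpretable anomaly-detection pipeline it discusses might look as follows. The dataset, feature layout, and the permutation-style explanation routine are illustrative assumptions, not the paper's actual case study or method.

```python
# Illustrative sketch only: an anomaly detector (Isolation Forest) paired with a
# simple permutation-based explanation of which features drive an anomaly score.
# The synthetic data and explanation heuristic are hypothetical, not from the paper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical fused-sensor data: 500 normal points plus 10 injected anomalies.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
anomalies = rng.normal(loc=5.0, scale=1.0, size=(10, 3))
X = np.vstack([normal, anomalies])

model = IsolationForest(contamination=0.02, random_state=0).fit(X)
scores = model.decision_function(X)           # lower score = more anomalous
flagged = np.where(model.predict(X) == -1)[0]  # indices labeled as anomalies

def feature_contributions(model, background, x, n_repeats=30, rng=rng):
    """Rough per-feature explanation: how much does replacing one feature of x
    with values drawn from the background data raise its anomaly score?"""
    base = model.decision_function(x.reshape(1, -1))[0]
    contributions = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_repeats, 1))
        perturbed[:, j] = rng.choice(background[:, j], size=n_repeats)
        contributions[j] = model.decision_function(perturbed).mean() - base
    return contributions  # larger value = feature j pushed x toward "anomalous"

for i in flagged[:3]:
    contrib = feature_contributions(model, normal, X[i])
    print(f"point {i}: score={scores[i]:.3f}, contributions={np.round(contrib, 3)}")
```

The explanation step here is a generic perturbation heuristic; a survey such as this one would typically compare it against model-agnostic explainers (e.g., SHAP or LIME) and the transparency criteria proposed in the paper.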
2021
2021
54
69
10.54216/FPA.030104
https://www.americaspg.com/articleinfo/3/show/1310