Preprint / Version 1

Assessing XAI: Unveiling Evaluation Metrics for Local Explanation, Taxonomies, Key Concepts, and Practical Applications

Authors

  • Md Abdul Kadir German Research Center for Artificial Intelligence
  • Amir Mosavi John von Neumann Faculty of Informatics, Obuda University
  • Daniel Sonntag German Research Center for Artificial Intelligence

DOI:

https://doi.org/10.31224/2989

Keywords:

XAI, machine learning, explainable artificial intelligence, explainable AI, explainable machine learning

Abstract

Within the past few years, the accuracy of deep learning and machine learning models has improved significantly, while less attention has been paid to their responsibility, explainability, and interpretability. eXplainable Artificial Intelligence (XAI) methods, guidelines, concepts, and strategies offer ways to evaluate models in order to improve their fidelity, faithfulness, and overall explainability. Owing to the diversity of data and learning methodologies, there is no clear definition of the validity, reliability, and evaluation metrics of explainability. This article reviews the evaluation metrics used for XAI through a comprehensive and systematic literature review following the PRISMA guidelines. Based on the results, this study proposes two taxonomies for the evaluation metrics: one based on applications and one based on the evaluation metrics themselves.

Posted

2023-05-05