The growing use of artificial intelligence has created a need to explain the inner logic of learning algorithms so that people can place trust in them. Most of us do not know what these algorithms are doing or how they are doing it, and this gap in understanding has given rise to a new area of research aimed at explaining artificial intelligence. Addressing this evolving area, Dr Luca Longo, an ADAPT researcher at TU Dublin, recently completed a systematic review of eXplainable Artificial Intelligence (XAI), published on ScienceDirect and in the journal Machine Learning and Knowledge Extraction. The papers bring together the scientific studies on XAI that have, until now, been scattered across the literature.
Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, coupled with several studies attempting to define the concept of explainability and how it should be evaluated.
This systematic review contributes to the body of knowledge by clustering all the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability and the evaluation approaches for XAI methods.
The structure of this hierarchy builds on an exhaustive analysis of existing taxonomies and peer-reviewed scientific material. The findings suggest that scholars have identified numerous notions and requirements that an explanation should meet in order to be easily understandable by end-users and to provide actionable information that can inform decision making. Scholars have also proposed various approaches for assessing the degree to which machine-generated explanations meet these demands. Overall, these approaches can be clustered into human-centred evaluations and evaluations based on more objective metrics.
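To give a flavour of what an evaluation based on objective metrics can look like, the short sketch below implements a simple deletion test: the features an explanation ranks as most important are removed, and the resulting drop in the model's confidence is compared with removing features at random. This example is purely illustrative and is not taken from the reviewed papers; the synthetic data, the logistic-regression model and the coefficient-based attribution are assumptions chosen only to keep the snippet self-contained.

```python
# Illustrative sketch of an "objective" XAI evaluation: a deletion test.
# Everything below (data, model, attribution method) is a stand-in, not the
# method of the reviewed papers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple model on synthetic data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def attribution(model, x):
    """A naive feature-attribution 'explanation': coefficient times feature value."""
    return model.coef_[0] * x

def deletion_score(model, x, ranking, k, baseline=0.0):
    """Model confidence in its original prediction after the top-k ranked
    features have been deleted (replaced by a baseline value)."""
    pred = model.predict(x.reshape(1, -1))[0]
    x_del = x.copy()
    x_del[ranking[:k]] = baseline
    return model.predict_proba(x_del.reshape(1, -1))[0, pred]

x = X[0]
ranking = np.argsort(-np.abs(attribution(model, x)))       # most important first
random_ranking = np.random.default_rng(0).permutation(len(x))

# A faithful explanation should degrade the prediction faster than chance:
# the "explanation" column should fall below the "random" column as k grows.
for k in (1, 3, 5):
    print(f"k={k}: explanation={deletion_score(model, x, ranking, k):.3f}  "
          f"random={deletion_score(model, x, random_ranking, k):.3f}")
```

Human-centred evaluations, by contrast, involve studies with end-users, for example asking participants to judge or act on the explanations they receive rather than computing a score automatically.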
However, despite the vast body of knowledge developed around the concept of explainability, there is no general consensus among scholars on how an explanation should be defined, or on how its validity and reliability should be assessed. The review concludes by critically discussing these gaps and limitations, and it defines future research directions with explainability as the starting component of any artificially intelligent system.
The full papers can be read online:
Science Direct: https://www.sciencedirect.com/science/article/pii/S1566253521001093
Machine Learning and Knowledge Extraction Journal: https://www.mdpi.com/2504-4990/3/3/32