Model-specific Explainable Artificial Intelligence techniques: State-of-the-art, Advantages and Limitations
M.A. Khan (TU Delft - Electrical Engineering, Mathematics and Computer Science)
C. Lal – Mentor (TU Delft - Cyber Security)
M. Conti – Mentor (TU Delft - Cyber Security)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
Artificial Intelligence (AI) and Machine Learning (ML) applications are widely used to solve problems across many sectors. These applications have greatly reduced the human effort and involvement required: AI/ML systems make their own predictions and need little human assistance. However, over the last few years, several incidents involving deployed systems have raised questions about the transparency of AI/ML systems. Without expertise, it is not always straightforward to understand why a system makes certain predictions. This pressing issue has given rise to the emerging field of Explainable Artificial Intelligence (XAI). In this research, we present the current work on a specific type of XAI, namely model-specific XAI. Model-specific XAI techniques are tied to particular types of ML models. We examine several recent model-specific XAI techniques and discuss their advantages and disadvantages. Comparing the techniques, we find a set of general requirements that they should adhere to (expertise, bias, time, privacy and performance). We characterize the techniques as feature-based, concept-based and logic-based. With regard to future work, there is room for improvement in several areas, ranging from exploring hybrid techniques to investigating how current techniques can better preserve privacy.
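To give a concrete sense of the feature-based, model-specific category mentioned above, the following minimal sketch (not part of the thesis, and assuming scikit-learn with its RandomForestClassifier and the bundled breast-cancer dataset) shows an explanation derived from a model's own internals: impurity-based feature importances are computed from the fitted trees themselves, which is what makes the explanation model-specific rather than model-agnostic.

```python
# Illustrative sketch: a feature-based, model-specific explanation.
# Assumes scikit-learn; the dataset and model choice are examples only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Rank features by the importance the fitted model itself assigns to them
# (impurity-based importances read directly from the tree structure).
importances = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in importances[:5]:
    print(f"{name}: {score:.3f}")
```

Because the importance scores come from the internal structure of the trees, this kind of explanation cannot be transferred to an arbitrary black-box model, illustrating the trade-off between model-specific and model-agnostic XAI techniques discussed in this work.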