Title: BayLIME: Bayesian local interpretable model-agnostic explanations
Authors: Zhao, Xingyu (Heriot-Watt University); Huang, Wei (University of Liverpool); Huang, Xiaowei (University of Liverpool); Robu, Valentin (TU Delft Algorithmics); Flynn, David (Heriot-Watt University)
Contributors: de Campos, Cassio (editor); Maathuis, Marloes H. (editor)
Date: 2021
Abstract: Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop a novel Bayesian extension to the LIME framework, one of the most widely used approaches in XAI, which we call BayLIME. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency in repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state-of-the-art (LIME, SHAP and Grad-CAM) by its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.
To reference this document use: http://resolver.tudelft.nl/uuid:94f2d335-b03d-4555-b71d-7f50f59ee0d4
Source: Uncertainty in Artificial Intelligence, 27-30 July 2021, Online, 161
Event: 37th International Conference on Uncertainty in Artificial Intelligence, 2021-07-26 → 2021-07-30
Series: Proceedings of Machine Learning Research, 2640-3498, 161
Part of collection: Institutional Repository
Document type: conference paper
Rights: © 2021 Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn
Files: zhao21a.pdf (PDF, 1.71 MB)
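
Note: the abstract describes replacing LIME's purely data-driven local surrogate with Bayesian reasoning that can absorb prior knowledge. The snippet below is only a minimal illustrative sketch of that general idea, not the authors' implementation: it fits a Bayesian linear surrogate (scikit-learn's BayesianRidge, used here as a stand-in) to kernel-weighted perturbations around a single instance, so the explanation comes with a posterior over feature attributions. The function name, perturbation scheme, and kernel parameters are hypothetical choices for illustration.

    # Sketch of a Bayesian LIME-style local surrogate (illustrative, tabular case).
    import numpy as np
    from sklearn.linear_model import BayesianRidge

    def bayesian_local_explanation(black_box_predict, x, n_samples=1000, kernel_width=0.75):
        """Explain one instance x via a Bayesian linear model fit to local perturbations."""
        rng = np.random.default_rng(0)
        # Perturb the instance with Gaussian noise around x (simplified sampling).
        X_pert = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))
        y_pert = black_box_predict(X_pert)
        # LIME-style exponential kernel: closer perturbations get larger weights.
        dists = np.linalg.norm(X_pert - x, axis=1)
        weights = np.exp(-(dists ** 2) / kernel_width ** 2)
        # Bayesian linear regression: priors regularise the surrogate, and the
        # posterior yields both attributions (coef_) and their covariance (sigma_).
        surrogate = BayesianRidge()
        surrogate.fit(X_pert, y_pert, sample_weight=weights)
        return surrogate.coef_, surrogate.sigma_

In this sketch the priors are BayesianRidge's defaults; the paper's key point, injecting informative priors from other XAI or V&V outputs, would instead require setting the prior parameters explicitly, which is beyond this illustration.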