Local Explanations for Hyperspectral Image Classification

Title: Local Explanations for Hyperspectral Image Classification: Extending the Local Interpretable Model-Agnostic Explanation Approach with Principal Components
Author: Wendel, K. (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributor: Tax, D.M.J. (mentor)
Degree granting institution: Delft University of Technology
Programme: Computer Science
Date: 2021-01-05

Abstract: Despite the widespread adoption of machine learning models in automatic decision making, many remain black boxes whose inner workings are unknown. To reason about a given prediction, a local surrogate model can be used to approximate the local decision boundary of the black box. One such local explanation method, Local Interpretable Model-Agnostic Explanations (LIME), generates artificial data around an instance, trains a model on a simplified interpretable data representation together with the corresponding black-box predictions, and derives explanations from the features that are important in the local model. However, this assumes that the generated data reflects the characteristics of the classification problem and that the simplified representation is able to capture them. In this work, the LIME local sampling procedure and interpretable representations are investigated for high-dimensional, highly correlated feature sets, in this case hyperspectral images. Using the Pavia University dataset, it is shown that the sampling strategy currently used is not suitable for hyperspectral data. With the help of Principal Components (PCs), a new sampling strategy is given, which improves local data generation for high-dimensional and correlated data. Furthermore, PC-LIME is proposed, which uses these PCs as the interpretable representation in LIME.
With the new data sampling strategy, both PC-LIME and LIME are able to approximate the class posterior probability output of a black box. Furthermore, the resulting explanations from the local models were evaluated by domain experts, indicating that PC-LIME leads to intuitive explanations.

Subject: LIME, PCA, Explainability, Local explanations
To reference this document use: http://resolver.tudelft.nl/uuid:2ed39283-9099-4eb7-9974-cb22d80383d8
Part of collection: Student theses
Document type: master thesis
Rights: © 2021 K. Wendel
Files: KWE_thesis.pdf (PDF, 3.03 MB)
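The abstract's core idea — perturbing an instance in principal-component space so that generated samples respect the strong band correlations of hyperspectral data, then fitting an interpretable surrogate on the PC coordinates — can be sketched as follows. This is a minimal illustration under assumed details, not the thesis implementation: the synthetic data, the `scale` parameter, the ridge surrogate, and the toy black box are all placeholders chosen for the sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for hyperspectral pixels: 500 samples with 103 correlated
# bands (the Pavia University scene has 103 spectral bands). Real data
# would come from the actual image cube.
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 103))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 103))

# Fit PCA on the data so that local perturbations follow the
# correlation structure of the bands instead of perturbing each
# band independently (the failure mode described in the abstract).
pca = PCA(n_components=5).fit(X)

def sample_locally(x, n_samples=200, scale=0.5):
    """Perturb instance x along principal components, scaled by the
    variance each component explains, then map back to band space."""
    z = pca.transform(x[None, :])  # instance in PC coordinates
    noise = scale * rng.normal(size=(n_samples, pca.n_components_))
    z_perturbed = z + noise * np.sqrt(pca.explained_variance_)
    return pca.inverse_transform(z_perturbed), z_perturbed

# Toy black box standing in for the trained classifier: a logistic
# function of one latent direction, returning a posterior probability.
def black_box(X):
    s = X @ mixing[0] / np.linalg.norm(mixing[0]) ** 2
    return 1.0 / (1.0 + np.exp(-s))

x = X[0]
X_local, Z_local = sample_locally(x)
y_local = black_box(X_local)

# PC-LIME idea: the interpretable representation is the PC coordinates,
# so the surrogate is fit on Z_local rather than the raw bands. Its
# coefficients give one importance weight per principal component.
surrogate = Ridge(alpha=1.0).fit(Z_local, y_local)
importances = surrogate.coef_
```

The key design choice this illustrates is that both the sampling and the explanation live in the low-dimensional PC space, so the surrogate has a few decorrelated features to weight instead of a hundred highly correlated bands.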