Interpretability and performance of surrogate decision trees produced by Viper
O.K.N. Kaaij (TU Delft - Electrical Engineering, Mathematics and Computer Science)
A. Lukina – Mentor (TU Delft - Algorithmics)
Pradeep K. Murukannaiah – Graduation committee member (TU Delft - Interactive Intelligence)
Abstract
Machine learning models are used extensively in many high-impact scenarios. Many of these models are 'black boxes' that are almost impossible to interpret, and this lack of interpretability has limited their successful deployment. One approach to increasing interpretability is to use imitation learning to extract a more interpretable surrogate model from a black-box model. Our aim is to evaluate Viper, an imitation learning algorithm, in terms of performance and interpretability. To this end, we evaluate surrogate decision tree models produced by Viper on three different environments and attempt to interpret these models. We find that Viper generally produces high-performing, interpretable decision trees, and that performance and interpretability depend strongly on the context and on the quality of the oracle. We compare Viper's performance to that of similar imitation learning approaches and find that it performs as well as or better than they do, though our comparison is limited by differences in oracle quality.
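For readers unfamiliar with the algorithm, the sketch below shows the general shape of a VIPER-style extraction loop: roll out the current student policy, label the visited states with the oracle's actions, aggregate the data, and refit a decision tree. This is a minimal illustration, assuming a gymnasium-like environment and an `oracle` object with a `predict` method; it omits the Q-value-based resampling step that distinguishes Viper from plain DAgger, and all names here are illustrative rather than taken from the thesis itself.

```python
# Minimal sketch of a VIPER-style extraction loop (illustrative only).
# Assumes a gymnasium-like env and an oracle with a .predict(obs) method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_tree(env, oracle, n_iters=10, rollouts=20, max_depth=6):
    states, actions = [], []  # dataset aggregated across iterations
    student = None
    for _ in range(n_iters):
        for _ in range(rollouts):
            obs, _ = env.reset()
            done = False
            while not done:
                # Act with the student once it exists, the oracle otherwise.
                if student is None:
                    act = oracle.predict(obs)
                else:
                    act = int(student.predict(obs.reshape(1, -1))[0])
                # Always label the visited state with the oracle's action.
                states.append(obs)
                actions.append(oracle.predict(obs))
                obs, _, terminated, truncated, _ = env.step(act)
                done = terminated or truncated
        # Refit the surrogate decision tree on all data gathered so far.
        student = DecisionTreeClassifier(max_depth=max_depth)
        student.fit(np.array(states), np.array(actions))
    return student
```

The depth bound (`max_depth`) is what keeps the surrogate small enough to inspect by hand; the trade-off between this bound and rollout performance is exactly the interpretability-versus-performance tension the thesis evaluates.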