Optimal Decision Tree Policies for Markov Decision Processes

Conference Paper (2023)
Author(s)

D.A. Vos (TU Delft - Cyber Security)

Sicco Verwer (TU Delft - Cyber Security)

Publication Year
2023
Language
English
Copyright
© 2023 D.A. Vos, S.E. Verwer
Research Group
Cyber Security
Pages (from-to)
5457-5465
ISBN (electronic)
9781956792034
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Interpretability of reinforcement learning policies is essential for many real-world tasks, but learning such interpretable policies is a hard problem. In particular, rule-based policies such as decision trees and rule lists are difficult to optimize due to their non-differentiability. While existing techniques can learn verifiable decision tree policies, there is no guarantee that the learners generate a policy that performs optimally. In this work, we study the optimization of size-limited decision trees for Markov Decision Processes (MDPs) and propose OMDTs: Optimal MDP Decision Trees. Given a user-defined size limit and MDP formulation, OMDT directly maximizes the expected discounted return for the decision tree using Mixed-Integer Linear Programming. By training optimal tree policies for different MDPs, we empirically study the optimality gap for existing imitation learning techniques and find that they perform sub-optimally. We show that this is due to an inherent shortcoming of imitation learning: complex policies cannot be represented using size-limited trees. In such cases, it is better to directly optimize the tree for expected return. While there is generally a trade-off between the performance and interpretability of machine learning models, we find that on small MDPs, depth-3 OMDTs often perform close to optimally.
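
To make the core idea concrete, below is a minimal sketch of encoding policy optimization for an MDP as a Mixed-Integer Linear Program: binary variables select one action per state, and a standard big-M relaxation of the Bellman backup lets the solver maximize the expected discounted return. This is not the paper's OMDT formulation (which additionally encodes the decision-tree structure and a size limit); the two-state toy MDP, variable names, and the use of the PuLP library are illustrative assumptions.

    # Hedged sketch: choose an optimal *deterministic* policy for a toy MDP
    # via a big-M MILP. Illustrates the optimization principle behind OMDT,
    # not the paper's exact formulation (no tree structure or size limit here).
    import pulp

    gamma = 0.9
    S, A = range(2), range(2)                    # hypothetical 2-state, 2-action MDP
    R = {(0, 0): 0.0, (0, 1): 1.0,               # reward R[s, a]
         (1, 0): 2.0, (1, 1): 0.0}
    P = {(0, 0): [1.0, 0.0], (0, 1): [0.0, 1.0], # transitions P[s, a] -> dist over s'
         (1, 0): [0.0, 1.0], (1, 1): [1.0, 0.0]}
    d0 = [0.5, 0.5]                              # initial state distribution

    v_max = max(R.values()) / (1 - gamma)        # value bounds give a valid big-M
    v_min = min(R.values()) / (1 - gamma)
    M = v_max - v_min

    prob = pulp.LpProblem("mdp_policy_milp", pulp.LpMaximize)
    V = {s: pulp.LpVariable(f"V_{s}", lowBound=v_min, upBound=v_max) for s in S}
    pi = {(s, a): pulp.LpVariable(f"pi_{s}_{a}", cat="Binary")
          for s in S for a in A}

    # Objective: expected discounted return from the initial distribution.
    prob += pulp.lpSum(d0[s] * V[s] for s in S)

    for s in S:
        prob += pulp.lpSum(pi[s, a] for a in A) == 1  # exactly one action per state
        for a in A:
            # The Bellman backup binds only for the chosen action (big-M
            # deactivates it otherwise); maximization then pushes V up to
            # the fixed point of the selected policy.
            prob += V[s] <= R[s, a] + gamma * pulp.lpSum(
                P[s, a][t] * V[t] for t in S) + M * (1 - pi[s, a])

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    policy = {s: next(a for a in A if pi[s, a].value() > 0.5) for s in S}
    print("policy:", policy, "return:", pulp.value(prob.objective))

On this toy MDP the solver picks action 1 in state 0 (reward 1, moving to state 1) and action 0 in state 1 (a self-loop with reward 2), giving the same policy and return (19.5) that value iteration would. OMDT's contribution is to add MILP constraints so that the selected policy must be representable as a decision tree of bounded size.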

Files

2301.13185.pdf
(pdf | 1.71 MB)
License info not available