Interpretability distinguishes decision trees from most other machine learning models; what they still have in common is that they are vulnerable to adversarial examples. Various robust decision tree algorithms exist, but they either do not provide optimal results or do not scale to data with continuous features. In this work, we present RobTree, a scalable optimal robust decision tree algorithm for continuous features. We propose new theorems that halve the number of candidate thresholds that must be considered and that enable pruning techniques. Our results indicate that, for trees of depth two, RobTree outperforms the state-of-the-art in runtime by up to two orders of magnitude.
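To make the threshold-reduction idea concrete, the following is a minimal sketch (not RobTree's actual theorems, which are not specified in this abstract): for a continuous feature, the standard candidate split thresholds are the midpoints between consecutive distinct values, and under an L-infinity perturbation of radius `eps`, any sample whose feature value lies within `eps` of the threshold can be pushed to either side by an adversary. The function names and the `eps` parameter are illustrative assumptions.

```python
# Hypothetical illustration of candidate-threshold enumeration for a
# robust split on a continuous feature. This is NOT the RobTree
# algorithm; it only shows why the set of thresholds to consider
# matters for runtime.

def candidate_thresholds(values):
    """Midpoints between consecutive distinct feature values --
    the classical candidate set an optimal tree learner must search."""
    xs = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(xs, xs[1:])]

def ambiguous_count(values, threshold, eps):
    """Number of samples an adversary with L-inf budget eps can move
    across the given threshold (i.e. adversarially ambiguous samples)."""
    return sum(1 for v in values if abs(v - threshold) <= eps)

feature = [1.0, 2.0, 4.0]
thresholds = candidate_thresholds(feature)   # [1.5, 3.0]
counts = [ambiguous_count(feature, t, 0.6) for t in thresholds]
```

Since an optimal search evaluates every candidate threshold at every node, any theorem that safely discards candidates (e.g. halving the set, as claimed above) directly shrinks the search space.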