Efficient Training of Robust Decision Trees Against Adversarial Examples

Conference Paper (2021)
Author(s)

Daniël Vos (TU Delft - Cyber Security)

S.E. Verwer (TU Delft - Cyber Security)

Publication Year
2021
Language
English
Copyright
© 2021 D.A. Vos, S.E. Verwer
Research Group
Cyber Security
Pages (from-to)
702-703
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

It has recently been shown that many machine learning models are vulnerable to adversarial examples: perturbed samples that trick the model into misclassifying them. Neural networks have received most of the attention, but decision trees and their ensembles achieve state-of-the-art results on tabular data, motivating research on their robustness. Recently, the first methods have been proposed to train decision trees and their ensembles robustly [1, 2, 3, 4], but these state-of-the-art methods are expensive to run.
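To illustrate the vulnerability the abstract describes, here is a minimal sketch, not the paper's method: a small perturbation that crosses a split threshold can flip a decision tree's prediction. It assumes scikit-learn, a synthetic two-moons dataset, and a hypothetical helper find_adversarial that brute-forces small axis-aligned perturbations.

```python
# Minimal sketch (not the paper's algorithm): a tiny input perturbation
# can cross a split threshold and flip a decision tree's prediction.
# Assumes scikit-learn; find_adversarial is a hypothetical helper.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier


def find_adversarial(model, x, label, eps_grid):
    """Search small axis-aligned perturbations for one that flips the label."""
    directions = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    for eps in eps_grid:
        for d in directions:
            x_adv = x + eps * d
            if model.predict(x_adv.reshape(1, -1))[0] != label:
                return x_adv, eps
    return None, None


# Fit an ordinary (non-robust) tree on toy data.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

x = X[0]
label = tree.predict(x.reshape(1, -1))[0]
x_adv, eps = find_adversarial(tree, x, label, np.linspace(0.01, 0.5, 50))
if x_adv is not None:
    print(f"prediction flipped from {label} by a perturbation of size {eps:.2f}")
```

Robust training methods such as those cited above aim to make such small perturbations ineffective, at the cost of extra computation during tree construction.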
