MurTree

Optimal Decision Trees via Dynamic Programming and Search

Journal Article (2022)
Author(s)

Emir Demirović (TU Delft - Algorithmics)

Anna Lukina (TU Delft - Algorithmics)

Emmanuel Hebrard (CNRS-LAAS)

Jeffrey Chan (RMIT University)

James Bailey (University of Melbourne)

Christopher Leckie (University of Melbourne)

Kotagiri Ramamohanarao (University of Melbourne)

Peter J. Stuckey (Monash University)

Research Group
Algorithmics
Copyright
© 2022 E. Demirović, A. Lukina, Emmanuel Hebrard, Jeffrey Chan, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Peter J. Stuckey
Publication Year
2022
Language
English
Issue number
26
Volume number
23
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Decision tree learning is a widely used approach in machine learning, favoured in applications that require concise and interpretable models. Heuristic methods are traditionally used to quickly produce models with reasonably high accuracy. A commonly criticised point, however, is that the resulting trees may not be the best representation of the data in terms of accuracy and size. In recent years, this has motivated the development of optimal classification tree algorithms that globally optimise the decision tree, in contrast to heuristic methods that perform a sequence of locally optimal decisions. We follow this line of work and provide a novel algorithm for learning optimal classification trees based on dynamic programming and search. Our algorithm supports constraints on the depth of the tree and the number of nodes. The success of our approach is attributed to a series of specialised techniques that exploit properties unique to classification trees. Whereas algorithms for optimal classification trees have traditionally been plagued by high runtimes and limited scalability, we show in a detailed experimental study that our approach uses only a fraction of the time required by the state of the art and can handle datasets with tens of thousands of instances, providing improvements of several orders of magnitude and notably contributing towards the practical use of optimal decision trees.
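
To make the dynamic-programming idea concrete, the sketch below implements the plain recursion that optimal-tree algorithms of this family build on: the cheapest tree for a set of instances and a depth budget is either a single leaf, or the best split whose two children are solved independently with one less unit of depth. This is a didactic Python sketch under assumptions of mine (binary features, misclassification cost, a depth constraint only, and illustrative names), not MurTree's implementation, which adds specialised techniques such as a dedicated depth-two solver, caching, and lower bounds to make the recursion scale.

```python
from collections import Counter
from functools import lru_cache

def optimal_tree_misclassifications(X, y, max_depth):
    """Exact minimum number of misclassifications over all decision
    trees of depth <= max_depth on binary data (didactic sketch; the
    function name and API are illustrative, not from the paper)."""

    def leaf_cost(idx):
        # The best a leaf can do is predict the majority class of the
        # instances that reach it.
        counts = Counter(y[i] for i in idx)
        return len(idx) - max(counts.values())

    @lru_cache(maxsize=None)
    def best(idx, depth):
        # idx is a frozenset of instance indices reaching this node.
        cost = leaf_cost(idx)          # option 1: place a leaf here
        if depth == 0 or cost == 0:    # no budget left, or already perfect
            return cost
        for f in range(len(X[0])):     # option 2: split on feature f
            left = frozenset(i for i in idx if X[i][f] == 0)
            right = idx - left
            if left and right:         # skip splits that separate nothing
                cost = min(cost, best(left, depth - 1) + best(right, depth - 1))
        return cost

    return best(frozenset(range(len(y))), max_depth)

# XOR of two features: no depth-one tree fits it, a depth-two tree does.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]
print(optimal_tree_misclassifications(X, y, 1))  # 2 misclassifications
print(optimal_tree_misclassifications(X, y, 2))  # 0 misclassifications
```

The memoisation on frozensets of instance indices mirrors the caching idea: the same subproblem (set of instances, remaining depth) can be reached through different split orders and is solved only once. Supporting the paper's node-budget constraint would add a further dimension to this recursion.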