Decision Diagram Focused Learning
J. Schaap (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Mathijs M. De Weerdt – Mentor (TU Delft - Algorithmics)
J.G.M. van der Linden – Mentor (TU Delft - Algorithmics)
Konstantin Sidorov – Mentor (TU Delft - Algorithmics)
Thomas Abeel – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Abstract
Decision-Focused Learning (DFL) addresses settings where a system receives features as input and must predict the coefficients of a downstream optimization problem. Classically, one would apply a two-stage solution, which trains the predictor as a regression task and uses the optimizer only during evaluation. However, the two-stage solution does not optimize for the downstream objective. One might therefore use DFL techniques to train the predictor instead. Nonetheless, these techniques fail to take the entire solution space into account and optimize only toward the true optimal solution; as a result, they may still fail to maximize the total downstream value. Furthermore, they are computationally expensive.
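To make the contrast concrete, the following is a minimal, hypothetical sketch on a toy 0-1 knapsack instance; the brute-force solver, the coefficient values, and the regret formulation are illustrative assumptions, not taken from the thesis. It shows how a prediction with large regression error (the two-stage loss) and one with a large decision error (regret) are different things.

```python
# Hypothetical toy example: two-stage loss (MSE on coefficients) vs. a
# decision-focused quantity (regret of the induced decision).
import numpy as np

def solve(values, weights, capacity):
    # Brute-force 0-1 knapsack: return the feasible subset maximizing value.
    n = len(values)
    best_sel, best_val = None, -np.inf
    for m in range(2 ** n):
        sel = np.array([(m >> i) & 1 for i in range(n)])
        if sel @ weights <= capacity and sel @ values > best_val:
            best_sel, best_val = sel, sel @ values
    return best_sel, best_val

true_values = np.array([10.0, 5.0, 4.0])   # true objective coefficients
pred_values = np.array([5.0, 6.5, 4.5])    # poor coefficient estimates
weights = np.array([4.0, 3.0, 3.0])
capacity = 6.0

# Two-stage loss: regression error on the coefficients themselves.
mse = np.mean((pred_values - true_values) ** 2)

# Decision-focused view: solve with the *predicted* coefficients, then
# measure the regret of that decision under the *true* coefficients.
sol_pred, _ = solve(pred_values, weights, capacity)
sol_true, opt_val = solve(true_values, weights, capacity)
regret = opt_val - sol_pred @ true_values

print(mse, regret)  # the predicted decision {1,2} loses value 1.0 vs. optimum {0}
```

Here the predictor badly misestimates the first coefficient, so the optimizer picks the wrong items; a two-stage model trained on MSE alone has no signal about that decision error, which is exactly what DFL losses target.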
We tackle these two problems using Decision Diagrams (DDs) for DFL. DDs offer three main benefits for DFL. First, the DD can be cached between runs, speeding up the training loop. Second, we present four novel loss functions that use DDs to reason efficiently over the entire solution space. Third, we introduce a novel method to relax the DDs, reducing solve time during training.
We experimentally show that the DDs speed up the training loop substantially. We further show that the DD-based DFL losses perform on par with other state-of-the-art DFL losses. Finally, we show experimentally when, and which of, the losses work with relaxed DDs.