Making Learners (More) Monotone

Conference Paper (2020)
Author(s)

Tom Viering (TU Delft - Pattern Recognition and Bioinformatics)

A. Mey (TU Delft - Interactive Intelligence)

Marco Loog (University of Copenhagen, TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Pattern Recognition and Bioinformatics
Copyright
© 2020 T.J. Viering, A. Mey, M. Loog
DOI
https://doi.org/10.1007/978-3-030-44584-3_42
Publication Year
2020
Language
English
Bibliographical Note
Virtual/online event due to COVID-19
Volume number
12080
Pages (from-to)
535-547
ISBN (print)
978-3-030-44583-6
ISBN (electronic)
978-3-030-44584-3
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Learning performance can show non-monotonic behavior. That is, more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotonically. We prove consistency and monotonicity with high probability, and evaluate the algorithms on scenarios where non-monotonic behavior occurs. Our proposed algorithm MTHT makes fewer than 1% non-monotone decisions on MNIST while staying competitive in terms of error rate compared to several baselines. Our code is available at https://github.com/tomviering/monotone.
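To illustrate the general idea behind such wrapper algorithms (this is a hedged sketch, not the paper's exact MTHT procedure), one can keep a currently deployed model and only replace it with a model trained on more data when the new model is significantly better on held-out data, as judged by a statistical test. Here the toy learner (`fit_nearest_mean`), the significance level `alpha`, and the use of a one-sided sign test on validation disagreements are all illustrative assumptions:

```python
import numpy as np
from math import comb

def error_rate(model, X, y):
    # fraction of validation points the model misclassifies
    return float(np.mean(model(X) != y))

def fit_nearest_mean(X, y):
    # toy learner: nearest class-mean classifier, a stand-in for any
    # supervised learner one might want to wrap
    classes = np.unique(y)
    M = np.stack([X[y == c].mean(axis=0) for c in classes])
    return lambda Xq: classes[
        np.argmin(((Xq[:, None, :] - M) ** 2).sum(axis=-1), axis=1)
    ]

def monotone_update(current, candidate, X_val, y_val, alpha=0.05):
    """Keep `current` unless `candidate` wins a one-sided sign test on
    the validation points where the two models disagree (alpha is an
    illustrative threshold, not a value from the paper)."""
    cur_wrong = current(X_val) != y_val
    cand_wrong = candidate(X_val) != y_val
    n01 = int(np.sum(cur_wrong & ~cand_wrong))  # errors the candidate fixes
    n10 = int(np.sum(~cur_wrong & cand_wrong))  # errors the candidate introduces
    n = n01 + n10
    if n == 0:
        return current  # no disagreements: keep the deployed model
    # one-sided binomial tail: P[Binom(n, 1/2) >= n01]
    p = sum(comb(n, k) for k in range(n01, n + 1)) / 2 ** n
    return candidate if p < alpha else current
```

Because the deployed model only changes when the test fires, its validation error can rarely increase by a large amount, which is the sense in which such a wrapper makes the learning curve "more monotone" at the cost of occasionally ignoring extra data.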