Explaining Two Strange Learning Curves

Conference Paper (2023)
Author(s)

Zhiyi Chen (Student TU Delft, ETH Zürich)

Marco Loog (Radboud Universiteit Nijmegen)

Jesse Krijthe (TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Pattern Recognition and Bioinformatics
Copyright
© 2023 Zhiyi Chen, Marco Loog, J.H. Krijthe
DOI
https://doi.org/10.1007/978-3-031-39144-6_2
Publication Year
2023
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Pages (from-to)
16-30
ISBN (print)
9783031391439
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Learning curves illustrate how the generalization performance of a learner evolves with more training data. While they are a useful tool for characterizing learners, not all learning curve behavior is well understood. For instance, it is sometimes assumed that the more training data provided, the better the learner performs. However, counter-examples exist for both classical machine learning algorithms and deep neural networks, where errors do not monotonically decrease with training set size. Loog et al. [12] describe this monotonicity problem and present several regression examples in which simple empirical risk minimizers display unexpected learning curve behaviors. In this paper, we study two of these problems in detail and explain what causes their odd learning curves. For the first, we use a bias-variance decomposition to show that the monotonic increase in the learning curve is caused by an increase in the variance, which we attribute to a mismatch between the model and the data generating process. For the second problem, we explain the recurring increases in the learning curve by showing that only two solutions are attainable by the learner. The probability of obtaining a configuration of training objects that leads to the high-risk solution typically decreases as the training set size increases. However, for particular training set sizes, additional configurations that produce the high-risk solution become possible. We prove that these additional configurations increase the probability of the high-risk solution and thereby explain the unusual learning curve. These examples contribute to a more complete understanding of learning curves, the range of behaviors they can exhibit, and the reasons behind them.
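
To make the first mechanism concrete, here is a minimal Python sketch of a Monte Carlo bias-variance decomposition of a learning curve. It uses minimum-norm least squares, a standard setting in which the variance term peaks as the training set size approaches the number of features; this is an illustrative stand-in, not the construction studied in the paper, and the data generating process and all parameter values are assumptions of the sketch.

import numpy as np

rng = np.random.default_rng(0)
d, noise, n_repeats = 10, 0.5, 1000        # illustrative choices, not from the paper
beta = rng.normal(size=d)                  # fixed "true" coefficients
X_test = rng.normal(size=(200, d))         # fixed test inputs
f_test = X_test @ beta                     # noiseless test targets

def decompose(n):
    # Refit the empirical risk minimizer on many fresh training sets of size n,
    # then split the expected squared error on the test inputs into bias^2 and
    # variance (irreducible label noise is excluded from both terms).
    preds = np.empty((n_repeats, len(X_test)))
    for r in range(n_repeats):
        X = rng.normal(size=(n, d))
        y = X @ beta + rng.normal(0.0, noise, n)
        w = np.linalg.lstsq(X, y, rcond=None)[0]   # minimum-norm least-squares fit
        preds[r] = X_test @ w
    bias2 = np.mean((preds.mean(axis=0) - f_test) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

for n in [2, 5, 8, 10, 12, 15, 20, 40]:
    b2, v = decompose(n)
    print(f"n={n:3d}  bias^2={b2:8.3f}  variance={v:8.3f}  excess risk={b2 + v:8.3f}")

Plotting bias^2 + variance against n traces the learning curve; here the variance, not the bias, produces the rise around n = d. The mechanism behind the paper's first example differs in its details (a model/data mismatch), but the decomposition machinery is the same.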

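The second mechanism can be sketched just as directly. The rules below are an assumption for illustration, not the paper's learner: objects come in two types with equal probability, and a hypothetical learner returns a low-risk solution unless the training sample happens to be exactly balanced between the types, in which case it returns a high-risk one. Balanced samples are only possible at even sample sizes, so the expected risk jumps up at every even n, giving recurring increases of the kind described above.

from math import comb

# Hypothetical risks of the learner's only two attainable solutions.
R_LOW, R_HIGH = 0.1, 1.0

def p_high(n):
    # Probability that n fair draws of two object types come out balanced:
    # C(n, n/2) / 2^n for even n, impossible for odd n.
    return comb(n, n // 2) / 2 ** n if n % 2 == 0 else 0.0

for n in range(1, 13):
    risk = p_high(n) * R_HIGH + (1 - p_high(n)) * R_LOW
    print(f"n={n:2d}  P(high-risk solution)={p_high(n):.4f}  expected risk={risk:.4f}")

Within the even sizes the probability still shrinks (roughly as 1/sqrt(n)), matching the abstract's observation that the high-risk configurations typically become less likely as the training set grows, while specific sizes reintroduce them.
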
Files

978_3_031_39144_6_2.pdf
(pdf | 0.901 MB)
- Embargo expired on 01-04-2024
License info not available