Prevalence of non-monotonicity in learning curves


Learning curves are useful for determining how much data is needed to reach a given performance. The conventional belief is that more data improves performance; however, recent work challenges this assumption and shows non-monotonic behavior of certain learners on certain problems. This paper presents a new approach for detecting non-monotonicity in empirical learning curves. The method monitors the degree of monotonicity violation on non-monotonic intervals, measured by the performance difference across each interval. The accuracy of the algorithm is assessed through a series of diverse experiments in which it is applied to a subset of the extensive Learning Curve Database (LCDB). The results indicate an experimental accuracy of 95.5% in identifying non-monotonicity in real learning curves. Importantly, the metric distinguishes genuine non-monotonic trends from minor fluctuations attributable to measurement error.
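The core idea of quantifying monotonicity violations by performance differences can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the error-rate convention (lower is better), and the noise tolerance `tol` are all assumptions introduced here.

```python
import numpy as np

def max_violation(errors):
    """Largest performance drop across any non-monotonic interval of an
    empirical learning curve, given error rates at increasing training
    sizes (lower error is better). Hypothetical helper, not from the paper.
    """
    errors = np.asarray(errors, dtype=float)
    # Running minimum of the error so far: for a monotone (always
    # improving) curve this equals the curve itself.
    running_best = np.minimum.accumulate(errors)
    # Violation at each anchor: how far the error rose above the best
    # value already achieved with less data.
    return float((errors - running_best).max())

def is_non_monotone(errors, tol=0.01):
    # Flag the curve only when the worst violation exceeds a tolerance,
    # so measurement noise is not mistaken for a genuine trend.
    return max_violation(errors) > tol
```

For example, the curve `[0.30, 0.25, 0.28, 0.20]` rises by 0.03 above its running best at the third anchor, so `is_non_monotone` flags it at `tol=0.01` but not at `tol=0.05`, illustrating how a threshold separates real violations from small fluctuations.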