A Comparative Analysis of Learning Curve Models and their Applicability in Different Scenarios

Finding dataset patterns that lead to a particular parametric curve model


Abstract

Learning curves display predictions of the chosen model's performance at different training set sizes. They can help estimate the amount of data required to achieve a minimal error rate, thus reducing the cost of data collection. However, our understanding of the various shapes of learning curves and their applicability is still insufficient: even a parametric model that is highly accurate on average can perform poorly in certain scenarios. The objective of this research is therefore to identify specific patterns in datasets that influence the selection of a particular parametric curve model. To accomplish this, I conduct experiments assessing the performance of different parametric learning curves, including power, exponential, and Morgan-Mercer-Flodin (MMF), as a function of the number of features, classes, and outliers, and of the machine learning model used. I find that the MMF and exponential curves outperform the power law for all machine learning models. All curves work best with the Logistic Regression, Bernoulli Naive Bayes, and Multinomial Naive Bayes models. The exponential and MMF curves provide better results than the power law for a small number of classes, and MMF also outperforms the power law for most numbers of features and outlier percentages.
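
As an illustration of the kind of comparison described above, the minimal sketch below fits the three parametric families to observed (training set size, error) pairs and scores each fit by its extrapolation error on larger sizes. The specific parameterizations of the power, exponential, and MMF curves, the data values, and the fit/extrapolation split are assumptions for illustration, not the exact protocol of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate parametric learning curve models (error as a function of
# training set size n). These are common formulations from the learning
# curve literature; the exact forms used in the study are an assumption here.
def power_law(n, a, b, c):
    return a * np.power(n, -b) + c

def exponential(n, a, b, c):
    return a * np.exp(-b * n) + c

def mmf(n, a, b, c, d):
    # Morgan-Mercer-Flodin: a is the error at n = 0, c is the
    # asymptotic error as n grows large.
    return (a * b + c * np.power(n, d)) / (b + np.power(n, d))

# Illustrative (training set size, error) observations.
sizes = np.array([50, 100, 200, 400, 800, 1600, 3200, 6400], dtype=float)
errors = np.array([0.42, 0.35, 0.30, 0.27, 0.25, 0.24, 0.235, 0.232])

# Fit on the smaller sizes, then measure extrapolation error on the rest.
fit_sizes, fit_errors = sizes[:5], errors[:5]
test_sizes, test_errors = sizes[5:], errors[5:]

candidates = [
    ("power", power_law, (1.0, 0.5, 0.1)),
    ("exponential", exponential, (1.0, 0.01, 0.1)),
    ("mmf", mmf, (0.5, 100.0, 0.2, 1.0)),
]

for name, model, p0 in candidates:
    params, _ = curve_fit(model, fit_sizes, fit_errors, p0=p0, maxfev=20000)
    pred = model(test_sizes, *params)
    mse = np.mean((pred - test_errors) ** 2)
    print(f"{name}: extrapolation MSE = {mse:.6f}")
```

In this setup, the model family with the lowest extrapolation error on the held-out larger sizes would be judged the better fit for that dataset and learner; repeating this across datasets with varying numbers of features, classes, and outliers yields the kind of comparisons summarized in the abstract.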