Support Vector Machines (SVMs) are widely used in various domains, with their performance heavily dependent on hyperparameter selection. However, hyperparameter tuning is computationally demanding due to the SVM training complexity, which is at best $O(n^2)$, where $n$ represents the number of training samples. To mitigate this challenge, we propose integrating a validation-based early stopping criterion into the Sequential Minimal Optimization (SMO) algorithm to enhance tuning efficiency.
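To make the proposed criterion concrete, the sketch below shows one way a validation-based stopping check could be woven into an iterative SVM solver. The paper's modified SMO implementation is not reproduced here; scikit-learn's SGDClassifier with hinge loss stands in only as a generic iterative trainer, and the names fit_with_early_stopping, check_every, patience, and tol are illustrative assumptions rather than the paper's API.

```python
# Minimal sketch of a validation-based early stopping criterion, assuming the
# check runs every `check_every` optimization passes. SGDClassifier with hinge
# loss is a stand-in for the paper's modified SMO solver; the stopping logic
# around it is the point of the example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

def fit_with_early_stopping(X_train, y_train, X_val, y_val,
                            max_passes=200, check_every=5, patience=3, tol=1e-4):
    clf = SGDClassifier(loss="hinge", alpha=1e-4)
    classes = np.unique(y_train)
    best_acc, stale_checks = -np.inf, 0
    for p in range(1, max_passes + 1):
        clf.partial_fit(X_train, y_train, classes=classes)  # one pass over the data
        if p % check_every == 0:                             # periodic validation check
            acc = clf.score(X_val, y_val)
            if acc > best_acc + tol:
                best_acc, stale_checks = acc, 0
            else:
                stale_checks += 1
            if stale_checks >= patience:                     # validation accuracy plateaued
                break
    return clf, best_acc, p

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model, val_acc, passes_used = fit_with_early_stopping(X_tr, y_tr, X_val, y_val)
print(f"stopped after {passes_used} passes, validation accuracy {val_acc:.3f}")
```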
We evaluate this approach within random search (RS) and successive halving (SH) frameworks, aiming to reduce tuning runtime while preserving model performance. We introduce a composite score function to enable a balanced assessment of accuracy and efficiency. Our empirical analysis reveals that incorporating early stopping into SMO significantly reduces hyperparameter tuning time under RS, but provides limited benefit under SH, given that method's inherent efficiency. Additionally, while other dataset characteristics influence the effectiveness of early stopping, we find evidence that dimensionality does not. We also observe that frequent assessments of the early-stopping objective introduce computational overhead that can offset the runtime gains; reducing the assessment frequency alleviates this overhead but diminishes the effectiveness of early stopping.
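The abstract does not give the composite score's exact definition, so the following is only an illustrative form, assuming accuracy is traded off against runtime normalized to the slowest configuration observed during tuning, with a hypothetical weight alpha.

```python
# Hypothetical composite score balancing accuracy and tuning runtime; this is
# an assumed form, not the paper's definition. `alpha` weights the runtime
# penalty, and runtimes are normalized to [0, 1] against the slowest
# configuration seen so far.
def composite_score(accuracy: float, runtime: float, max_runtime: float,
                    alpha: float = 0.5) -> float:
    normalized_runtime = runtime / max_runtime if max_runtime > 0 else 0.0
    return accuracy - alpha * normalized_runtime
```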
Our findings highlight the potential of early stopping in SMO to accelerate SVM hyperparameter tuning, particularly within random search-based approaches, and identify trade-offs involving assessment frequency and dataset-specific factors.