One of the problems in continual learning, where models are trained sequentially on a series of tasks, is a sudden drop in performance after switching to a new task, known as the stability gap. The presence of a stability gap likely indicates that training is not done optimally. In this work, we aim to address the stability gap problem by using sharpness-aware optimization, which biases convergence toward flat minima. While flat minima are known to mitigate forgetting, their role in ensuring stable learning during task transitions remains unexplored. Through a systematic analysis of two sharpness-aware optimizers, Entropy-SGD and C-Flat, we demonstrate that they produce smoother learning trajectories with reduced instability after a task switch. Furthermore, we show that C-Flat's second-order curvature approximation provides additional stabilization, suggesting that efficient Hessian-aware methods offer advantages for continual learning. The source code is available at Stability-Gap-SAM.
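To make the sharpness-aware idea concrete, the following is a minimal sketch of a generic SAM-style two-step update in PyTorch: perturb the weights toward the locally sharpest direction, then descend using the gradient computed at that perturbed point. This is an illustration of the general principle only; the paper evaluates Entropy-SGD and C-Flat, whose update rules differ in detail, and the function name, hyperparameters, and structure below are assumptions, not the authors' implementation.

```python
# Illustrative SAM-style sharpness-aware update (not the paper's Entropy-SGD
# or C-Flat procedure); rho controls the radius of the worst-case perturbation.
import torch

def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
    """One sharpness-aware update: ascend to an approximate worst-case point
    within an L2 ball of radius rho, then step with the gradient found there."""
    inputs, targets = batch

    # First pass: gradient at the current weights.
    base_opt.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Perturb weights in the normalized gradient direction (sharpest ascent).
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in model.parameters() if p.grad is not None]))
        eps = {p: rho * p.grad / (grad_norm + 1e-12)
               for p in model.parameters() if p.grad is not None}
        for p, e in eps.items():
            p.add_(e)

    # Second pass: gradient at the perturbed weights.
    base_opt.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Restore the original weights and apply the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)
    base_opt.step()
```

In a continual-learning loop, such an update would simply replace the plain optimizer step on every mini-batch of every task, so that the model is steered toward flatter regions both within and across task boundaries.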