Continual Backpropagation (CBP) has recently been proposed as an effective method for mitigating loss of plasticity in neural networks trained in continual learning (CL) settings. While extensive experiments have demonstrated the algorithm's ability to mitigate loss of plasticity, its susceptibility to catastrophic forgetting remains unexamined. This work closes that gap by systematically evaluating the magnitude of catastrophic forgetting in models trained with CBP and comparing it against four baseline algorithms. We demonstrate that CBP suffers significantly higher forgetting than all tested baselines, particularly in long-term and periodically revisited task scenarios. Moreover, we find that specific hyperparameters of the algorithm strongly influence the stability-plasticity trade-off. We further analyze the internal dynamics of CBP, identifying strong correlations between forgetting and metrics such as activation drift. Finally, we evaluate three modifications to CBP (noise injection, layer-specific replacement, and partial neuron replacement) and show that they reduce forgetting while maintaining high plasticity.
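For readers unfamiliar with the mechanism the abstract refers to, the following is a minimal sketch of CBP's selective-reinitialization step, assuming the utility-based unit replacement described by Dohare et al.: mature hidden units with the lowest running utility have their incoming weights re-drawn and their outgoing weights zeroed. The function name, default values, and the rounding of the fractional replacement rate are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cbp_replacement_step(w_in, w_out, utility, age,
                         replacement_rate=1e-4, maturity_threshold=100,
                         rng=np.random.default_rng(0)):
    """One selective-reinitialization step in the style of Continual
    Backpropagation (sketch, not the reference implementation).

    w_in:    (n_hidden, n_in)  incoming weights of one hidden layer
    w_out:   (n_out, n_hidden) outgoing weights of the same layer
    utility: (n_hidden,) running utility estimate per hidden unit
    age:     (n_hidden,) steps since each unit was last (re)initialized
    """
    # Only units that have had time to learn are eligible for replacement.
    eligible = np.flatnonzero(age > maturity_threshold)
    # Fractional rates are typically realized stochastically over steps;
    # we simply round the expected count here for brevity.
    n_replace = int(round(replacement_rate * len(eligible)))
    if n_replace == 0:
        return
    # Select the eligible units with the lowest utility.
    worst = eligible[np.argsort(utility[eligible])[:n_replace]]
    for j in worst:
        # Fresh incoming weights; zeroed outgoing weights, so the reset
        # unit does not immediately disturb downstream computation.
        w_in[j, :] = rng.normal(0.0, 1.0 / np.sqrt(w_in.shape[1]),
                                size=w_in.shape[1])
        w_out[:, j] = 0.0
        utility[j] = 0.0
        age[j] = 0
```

Because each replacement overwrites weights that may still encode earlier tasks, this step is also the natural locus of the forgetting studied here, and the modifications evaluated above (noise injection, layer-specific replacement, partial neuron replacement) can all be read as softer variants of the hard reset in the loop body.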