One of the central problems in continual learning is loss of plasticity: a model's gradually diminishing ability to learn new tasks. Several approaches have been proposed to counter it, such as Continual Backpropagation (CBP). CBP assigns each neuron a utility score that quantifies how much that neuron contributes to the network's output. We have analysed the distributions of these utility scores for several algorithms: backpropagation, L2 regularization, Shrink and Perturb, CBP, and CBP combined with L2 regularization or with Shrink and Perturb. Our results reveal that well-performing algorithms maintain better-balanced utility score distributions, with fewer neurons whose scores are near zero, indicating higher plasticity. In particular, CBP and its variants achieve better accuracy by actively redistributing utility and reinitializing underused neurons. These findings suggest that utility scores are a valuable analysis tool for understanding and improving continual learning systems.
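To make the mechanism concrete, the following is a minimal NumPy sketch of a CBP-style utility update and selective reinitialization. The utility definition (a running average of |activation| times total outgoing weight magnitude), the decay rate, and the replacement rate `rho` are simplified assumptions for illustration, not the exact formulation used by CBP or in our experiments.

```python
import numpy as np

def update_utility(utility, activations, out_weights, decay=0.99):
    """Running-average contribution utility for each hidden neuron.

    A neuron's instantaneous contribution is approximated (by assumption)
    as |activation| * sum of |outgoing weights|; utility is its
    exponentially decayed running average.
    """
    contribution = np.abs(activations) * np.abs(out_weights).sum(axis=1)
    return decay * utility + (1.0 - decay) * contribution

def reinit_low_utility(utility, in_weights, out_weights, rho=1e-4, rng=None):
    """Occasionally reinitialize the lowest-utility neuron.

    With probability proportional to rho, the least useful neuron gets
    fresh incoming weights and zeroed outgoing weights, so the reset does
    not immediately disturb the network's output.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < rho * len(utility):
        j = int(np.argmin(utility))
        in_weights[:, j] = rng.normal(0.0, 0.1, size=in_weights.shape[0])
        out_weights[j, :] = 0.0
        utility[j] = np.median(utility)  # reset its utility estimate
    return utility, in_weights, out_weights
```

Under this sketch, a neuron whose utility stays near zero is eventually recycled, which is the behaviour our distributional analysis associates with preserved plasticity.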