Deep learning systems are typically trained in static environments and fail to adapt when faced with a continuous stream of new tasks. Continual learning addresses this by allowing neural networks to learn sequentially without forgetting prior knowledge. However, such models often suffer from a gradual decline in learning ability, a phenomenon known as loss of plasticity. Recent work introduced Continual Backpropagation (CBP), which restores plasticity by fully reinitializing low-utility neurons. While this approach is effective, full reinitialization can also disrupt the learning process. This research proposes and tests three less disruptive alternatives to full reinitialization: injecting Gaussian noise into weights, reinitializing weights from the original initialization distribution, and rescaling weights to match their initial variance. We evaluate these strategies on the Permuted MNIST benchmark. Our findings show that noise injection performs comparably to the original CBP, reinitializing weights from the original distribution performs better, and weight rescaling performs substantially worse than CBP. This suggests that less destructive methods can maintain plasticity effectively, with some alternatives offering a better stability-plasticity trade-off than CBP.
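To make the three alternatives concrete, the following is a minimal PyTorch sketch of how each strategy could modify a low-utility hidden unit's incoming weights. The function name and the `init_std` and `noise_std` parameters are illustrative assumptions, not the paper's exact implementation, and the sketch omits CBP's utility tracking and maturity checks.

```python
import torch

def soft_reset(weight_in: torch.Tensor, idx: int, strategy: str,
               init_std: float = 0.1, noise_std: float = 0.01) -> None:
    """Apply one of three low-disruption alternatives to fully
    reinitializing a low-utility hidden unit.

    weight_in: incoming weight matrix of shape (n_hidden, n_in);
               row `idx` holds the selected unit's incoming weights.
    strategy:  "noise", "reinit", or "rescale".
    init_std:  assumed standard deviation of the original init distribution.
    """
    with torch.no_grad():
        if strategy == "noise":
            # 1) Perturb the existing weights with Gaussian noise,
            #    keeping most of the learned structure intact.
            weight_in[idx] += noise_std * torch.randn_like(weight_in[idx])
        elif strategy == "reinit":
            # 2) Redraw the incoming weights from the original
            #    (zero-mean Gaussian, assumed) initialization distribution.
            weight_in[idx] = init_std * torch.randn_like(weight_in[idx])
        elif strategy == "rescale":
            # 3) Keep the weight directions but rescale the row so its
            #    standard deviation matches the initial value.
            current_std = weight_in[idx].std().clamp_min(1e-8)
            weight_in[idx] *= init_std / current_std
        else:
            raise ValueError(f"unknown strategy: {strategy}")
```

In this sketch, "noise" and "rescale" preserve learned information while restoring weight magnitudes typical of initialization, whereas "reinit" discards the unit's incoming weights but, unlike full CBP reinitialization, leaves the rest of the reset machinery (e.g., outgoing-weight handling) unchanged.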