L.R. Engwegen
5 records found
Analyzing Plasticity Through Utility Scores
Comparing Continual Learning Algorithms via Utility Score Distributions
One of the central problems in continual learning is the loss of plasticity, the model's inability to learn new tasks. Several approaches have previously been proposed, such as Continual Backpropagation (CBP). This algorithm uses utility scores, which represent how useful
...
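To illustrate the notion of a utility score mentioned in this abstract, the sketch below maintains a running, contribution-style utility per hidden unit: how strongly the unit fires, weighted by the magnitude of its outgoing weights. This is a minimal sketch only; the decay factor and the exact weighting are assumptions, not the formulation used in CBP or in the work above.

```python
import numpy as np

def update_utilities(utilities, activations, w_out, decay=0.99):
    """Running contribution-style utility per hidden unit (simplified sketch).

    utilities:   (n_hidden,) running utility estimates
    activations: (batch, n_hidden) hidden activations for the current batch
    w_out:       (n_hidden, n_out) outgoing weights of the hidden layer
    decay:       exponential-averaging factor (assumed value)
    """
    # Instantaneous "contribution": how strongly a unit fires times how much
    # the next layer uses its output.
    contribution = np.mean(np.abs(activations), axis=0) * np.sum(np.abs(w_out), axis=1)
    return decay * utilities + (1.0 - decay) * contribution

# Toy usage with random data
rng = np.random.default_rng(0)
n_hidden, n_out = 8, 3
utilities = np.zeros(n_hidden)
for _ in range(100):
    h = np.maximum(rng.normal(size=(32, n_hidden)), 0.0)  # ReLU-like activations
    w_out = rng.normal(size=(n_hidden, n_out))
    utilities = update_utilities(utilities, h, w_out)
print("utility per hidden unit:", np.round(utilities, 3))
```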
Continual Backpropagation (CBP) has recently been proposed as an effective method for mitigating loss of plasticity in neural networks trained in continual learning (CL) settings. While extensive experiments have been conducted to demonstrate the algorithm's ability to mitigate loss of plasticity
...
Maintaining Plasticity for Deep Continual Learning
Activation Function-Adapted Parameter Resetting Approaches
Standard deep learning tools, in particular feed-forward artificial neural networks and the backpropagation algorithm, fail to adapt to sequential learning scenarios, where the model is continuously presented with new training data. Many algorithms that aim to solve this problem
...
Layerwise Perspective into Continual Backpropagation
Replacing the First Layer is All You Need
Continual learning faces a problem known as plasticity loss, in which models gradually lose the ability to adapt to new tasks. We investigate Continual Backpropagation (CBP), a method that tackles plasticity loss by continually resetting a small fraction of low-utility neurons. We
...
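The abstract above describes CBP as resetting a small fraction of low-utility neurons. A minimal sketch of that selective-reset step is shown below, assuming per-unit utility scores like those sketched earlier; the replacement rate, maturity threshold, and reinitialization scale are placeholder values, and the scheme is simplified relative to the published algorithm.

```python
import numpy as np

def reset_low_utility_units(w_in, w_out, utilities, ages,
                            replacement_rate=0.01, maturity_threshold=100, rng=None):
    """Reinitialize the least-useful mature hidden units (simplified CBP-style sketch).

    w_in:  (n_in, n_hidden) incoming weights of the hidden layer
    w_out: (n_hidden, n_out) outgoing weights of the hidden layer
    utilities, ages: (n_hidden,) per-unit utility estimates and update counts
    """
    rng = rng or np.random.default_rng()
    mature = np.where(ages > maturity_threshold)[0]
    n_reset = int(replacement_rate * mature.size)
    if n_reset == 0:
        return w_in, w_out, utilities, ages
    # Pick the mature units with the lowest utility.
    to_reset = mature[np.argsort(utilities[mature])[:n_reset]]
    # Fresh incoming weights; outgoing weights are zeroed so the reset units
    # initially do not disturb the network's output.
    w_in[:, to_reset] = rng.normal(scale=0.1, size=(w_in.shape[0], to_reset.size))
    w_out[to_reset, :] = 0.0
    utilities[to_reset] = 0.0
    ages[to_reset] = 0
    return w_in, w_out, utilities, ages
```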
Deep learning systems are typically trained in static environments and fail to adapt when faced with a continuous stream of new tasks. Continual learning addresses this by allowing neural networks to learn sequentially without forgetting prior knowledge. However, such models often
...