Evolving-to-Learn with Spiking Neural Networks

Abstract

Inspired by the natural nervous system, synaptic plasticity rules are applied to train spiking neural networks. Unlike learning algorithms such as backpropagation and evolution, which are widely used to train spiking neural networks, synaptic plasticity rules learn the parameters from local information, making them suitable for online learning on neuromorphic hardware. However, when such rules are used to learn new tasks, they usually require a significant amount of task-dependent fine-tuning. This thesis aims to ease this process by employing an evolutionary algorithm that evolves suitable synaptic plasticity rules for the task at hand. More specifically, we provide a set of local signals, a set of mathematical operators, and a global reward signal, from which a Cartesian genetic programming process constructs an optimal learning rule. In this work, we first test the algorithm on basic binary pattern classification tasks. Then, using this approach, we find learning rules that successfully solve an XOR and a cart-pole task, and discover new learning rules that outperform the baseline rules from the literature.
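To make the approach described above concrete, the sketch below shows a minimal, self-contained toy version of the idea: candidate plasticity rules are built from local signals and a global reward using a small operator set, and a (1 + λ) evolutionary search in the style of Cartesian genetic programming selects the best-performing rule. This is not the thesis implementation; the signal names (pre, post, w, reward), the toy reward-modulated synapse task, and all search settings are illustrative assumptions.

```python
# Minimal sketch (illustrative only): evolving a synaptic plasticity rule
# from local signals and a global reward with a CGP-style genome.
import random

# Building blocks: local signals available at a synapse plus a global reward,
# and a small set of mathematical operators.
INPUTS = ["pre", "post", "w", "reward"]
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_genome(n_nodes=6):
    """Each node picks an operator and two earlier columns (inputs or nodes)."""
    genome = []
    for i in range(n_nodes):
        op = random.choice(list(OPS))
        a = random.randrange(len(INPUTS) + i)
        b = random.randrange(len(INPUTS) + i)
        genome.append((op, a, b))
    return genome

def delta_w(genome, pre, post, w, reward):
    """Decode the genome into a weight update; the last node is the output."""
    values = [pre, post, w, reward]
    for op, a, b in genome:
        values.append(OPS[op](values[a], values[b]))
    return values[-1]

def fitness(genome, episodes=200, lr=0.05):
    """Toy reward-modulated task (an assumption, not from the thesis):
    the synapse should drive its weight toward 1, and the global reward is
    higher the closer the post-synaptic response is to the target."""
    w, total = 0.0, 0.0
    for _ in range(episodes):
        pre = random.random()
        post = pre * w
        reward = -abs(pre * 1.0 - post)          # global scalar reward
        w += lr * delta_w(genome, pre, post, w, reward)
        w = max(-2.0, min(2.0, w))               # keep the weight bounded
        total += reward
    return total / episodes

def mutate(genome, rate=0.3):
    """Point-mutate nodes while keeping the feed-forward index constraint."""
    child = list(genome)
    for i in range(len(child)):
        if random.random() < rate:
            child[i] = (random.choice(list(OPS)),
                        random.randrange(len(INPUTS) + i),
                        random.randrange(len(INPUTS) + i))
    return child

# (1 + lambda) evolution strategy, as commonly used with CGP.
random.seed(0)
parent = random_genome()
best = fitness(parent)
for generation in range(300):
    for _ in range(4):
        child = mutate(parent)
        score = fitness(child)
        if score >= best:                        # accept ties to allow neutral drift
            parent, best = child, score
print("best average reward:", round(best, 4))
```

In this toy search space a sensible rule such as delta_w = pre * (pre - post), i.e. a reward-independent delta rule for this particular task, is reachable with a few nodes; the thesis instead evaluates candidate rules on pattern classification, XOR, and cart-pole tasks with spiking neurons.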

Files

Evolving_to_learn_with_spiking... (.pdf | 9.67 MB)
- Embargo expired on 25-01-2024