Evolving-to-Learn with Spiking Neural Networks
J. LU (TU Delft - Aerospace Engineering)
G.C.H.E. de Croon – Mentor (TU Delft - Control & Simulation)
J.J. Hagenaars – Graduation committee member (TU Delft - Control & Simulation)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
Inspired by the natural nervous system, synaptic plasticity rules are applied to train spiking neural networks. Unlike learning algorithms such as backpropagation and evolution, which are widely used to train spiking neural networks, synaptic plasticity rules learn the parameters from local information, making them suitable for online learning on neuromorphic hardware. However, when such rules are applied to new tasks, they typically require substantial task-dependent fine-tuning. This thesis aims to ease this process by employing an evolutionary algorithm that evolves synaptic plasticity rules suited to the task at hand. More specifically, we provide a set of local signals, a set of mathematical operators, and a global reward signal, after which a Cartesian genetic programming process searches for an optimal learning rule built from these components. We first test the algorithm on basic binary pattern classification tasks. Then, using this approach, we find learning rules that successfully solve an XOR task and a cart-pole task, and discover new learning rules that outperform the baseline rules from the literature.
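The search described in the abstract can be illustrated with a minimal toy sketch. Note the simplifications: the thesis uses Cartesian genetic programming (a graph-based genome); here a tree-based genetic programming stand-in is used instead, and the "task" is reduced to recovering a known reward-modulated Hebbian rule (Δw = r · pre · post) from a set of local signals and operators. All names, signal sets, and parameters below are illustrative assumptions, not the thesis implementation.

```python
import random

# Assumed local signals available to a candidate rule (illustrative choice):
# pre- and post-synaptic activity, the current weight w, and a global reward r.
TERMINALS = ["pre", "post", "w", "r"]
OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def random_tree(depth):
    """Grow a random expression tree over the signal/operator sets."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, env):
    """Evaluate a candidate rule on one set of signal values."""
    if isinstance(tree, str):
        return env[tree]
    op, a, b = tree
    return OPS[op](evaluate(a, env), evaluate(b, env))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if isinstance(tree, str) or random.random() < 0.5:
        return random_tree(depth)
    op, a, b = tree
    if random.random() < 0.5:
        return (op, mutate(a, depth), b)
    return (op, a, mutate(b, depth))

def fitness(tree, samples):
    """MSE against the target rule dw = r * pre * post (toy stand-in for
    task performance, which the thesis measures by actually learning)."""
    err = 0.0
    for env in samples:
        target = env["r"] * env["pre"] * env["post"]
        err += (evaluate(tree, env) - target) ** 2
    return err / len(samples)

random.seed(0)
samples = [{t: random.uniform(-1, 1) for t in TERMINALS} for _ in range(64)]
best = random_tree(3)
best_err = init_err = fitness(best, samples)
for gen in range(300):              # (1+4) evolutionary strategy
    for _ in range(4):
        child = mutate(best)
        err = fitness(child, samples)
        if err <= best_err:         # elitism: keep the best rule found so far
            best, best_err = child, err
print(best_err <= init_err)  # prints True: elitism never discards the best rule
```

In the actual method, fitness would instead be measured by running the candidate plasticity rule inside a spiking network on the task (e.g. XOR or cart-pole) and scoring the resulting performance; the toy regression target above only keeps the sketch short and deterministic.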