The role of membrane time constant in the training of spiking neural networks
Improving accuracy by per-neuron learning
A. Pazderka (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Nergis Tömen – Mentor (TU Delft - Pattern Recognition and Bioinformatics)
A. Micheli – Mentor (TU Delft - Pattern Recognition and Bioinformatics)
E.A. Markatou – Graduation committee member (TU Delft - Cyber Security)
Abstract
Spiking neural networks (SNNs) aim to utilize mechanisms from biological neurons to bridge the computational and efficiency gaps between the human brain and machine learning systems. The widely used Leaky Integrate-and-Fire (LIF) neuron model accumulates input spikes into an exponentially decaying membrane potential and generates a spike when this potential exceeds a set threshold. A LIF neuron is characterized by learnable input weights and a manually selected membrane time constant τ, which determines the decay rate of the neuron's membrane potential. Previous work introduced the Parametric LIF (PLIF) neuron model with a learnable τ. However, the published experiments only featured a single τ per spiking layer, leaving room to explore the effect of a learnable τ for each neuron. The importance of τ stems from the trade-off it inherently introduces: its value determines whether a neuron prioritizes conveying information about spatial features or about temporal features. This work examines the effect of introducing a learnable τ per neuron, together with a new initialization method and a new regularization term that incentivizes low variance of τ within each PLIF layer. The experiments are conducted on the DVS128 Gesture dataset and compared against a baseline model from the original paper introducing the PLIF neuron model. The results are inconclusive but suggest that introducing a τ per neuron does not have a significant effect on the accuracy of a spiking neural network. Moreover, the evolution of τ during training exhibits interesting behavior and leads to two new hypotheses.
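
To make the mechanism concrete, the following is a minimal PyTorch sketch of a PLIF-style layer with one learnable time constant per neuron, plus an illustrative regularizer that penalizes the variance of τ within a layer. The sigmoid parameterization of the decay follows the spirit of the PLIF paper, but the names (PerNeuronPLIF, tau_variance_penalty, lam), the hard reset, and the omission of a surrogate gradient are illustrative assumptions, not the thesis' actual implementation.

import torch
import torch.nn as nn

class PerNeuronPLIF(nn.Module):
    """Hypothetical PLIF-style layer with one learnable tau per neuron.

    The decay 1/tau is parameterized through a sigmoid of a raw weight w,
    so tau stays greater than 1 throughout training.
    """
    def __init__(self, n_neurons: int, tau_init: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        # sigmoid(w) = 1/tau  =>  w = -log(tau - 1); tau_init = 2 gives w = 0
        init = -torch.log(torch.tensor(tau_init - 1.0))
        self.w = nn.Parameter(init.repeat(n_neurons))  # one parameter per neuron
        self.v_threshold = v_threshold

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (T, batch, n_neurons) input currents per time step
        decay = torch.sigmoid(self.w)  # elementwise 1/tau, one value per neuron
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for x in x_seq:
            # leaky integration: v <- v + (x - v) / tau
            v = v + decay * (x - v)
            # Heaviside spike; a surrogate gradient would be used in practice
            s = (v >= self.v_threshold).float()
            # hard reset of neurons that spiked
            v = v * (1.0 - s)
            spikes.append(s)
        return torch.stack(spikes)

    def tau(self) -> torch.Tensor:
        return 1.0 / torch.sigmoid(self.w)

def tau_variance_penalty(layers, lam: float = 1e-3) -> torch.Tensor:
    # illustrative regularizer: penalize the within-layer variance of tau
    return lam * sum(layer.tau().var() for layer in layers)

In training, tau_variance_penalty would be added to the task loss; driving the within-layer variance of τ toward zero would recover the shared-τ behavior of the original PLIF layer, so lam controls how far the per-neuron time constants may spread.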