The role of membrane time constant in the training of spiking neural networks

Improving accuracy by per-neuron learning


Abstract

Spiking neural networks (SNNs) aim to utilize mechanisms from biological neurons to bridge the computational and efficiency gaps between the human brain and machine learning systems. The widely used Leaky Integrate-and-Fire (LIF) neuron model accumulates input spikes into an exponentially decaying membrane potential and generates a spike when this potential exceeds a set threshold. A LIF neuron is characterized by learnable input weights and a manually selected membrane time constant 𝜏, which determines the decay rate of the neuron's membrane potential. Previous work introduced the Parametric LIF (PLIF) neuron model with a learnable 𝜏. However, the published experiments featured only a single shared 𝜏 per spiking layer, leaving the effect of a learnable 𝜏 for each individual neuron unexplored. The importance of 𝜏 stems from the trade-off it inherently introduces between the neuron's capability to convey information about spatial features and its capability to convey information about temporal features. This work examines the effect of introducing a learnable 𝜏 per neuron, together with a new initialization method and a new regularization term that incentivizes low variance of 𝜏 within each PLIF layer. Experiments are conducted on the DVS128 Gesture dataset and compared against a baseline model from the paper that introduced the PLIF neuron model. The results are inconclusive but suggest that introducing a per-neuron 𝜏 does not significantly affect the accuracy of a spiking neural network. Moreover, the evolution of 𝜏 during training exhibits interesting behavior and leads to two new hypotheses.
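To make the idea concrete, below is a minimal PyTorch-style sketch of a PLIF layer with one learnable 𝜏 per neuron and a layer-wise variance penalty. The sigmoid reparameterization of 1/𝜏 follows the original PLIF paper; the class name PerNeuronPLIF, the parameter shapes, and the tau_variance helper are assumptions made for illustration, and the surrogate gradient needed to train through the spike threshold is omitted.

```python
import math

import torch
import torch.nn as nn


class PerNeuronPLIF(nn.Module):
    """Minimal sketch of a PLIF layer with one learnable time constant per neuron."""

    def __init__(self, num_neurons: int, init_tau: float = 2.0,
                 v_threshold: float = 1.0):
        super().__init__()
        # Reparameterize as in the PLIF paper: 1/tau = sigmoid(w), which keeps
        # the decay factor in (0, 1). Here w has one entry per neuron instead
        # of a single shared scalar per layer.
        init_w = -math.log(init_tau - 1.0)
        self.w = nn.Parameter(torch.full((num_neurons,), init_w))
        self.v_threshold = v_threshold

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: [T, batch, num_neurons] input currents over T time steps.
        decay = torch.sigmoid(self.w)  # per-neuron 1/tau
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for x in x_seq:
            # Leaky integration toward the input; a larger tau means slower decay.
            v = v + decay * (x - v)
            # Hard threshold; training would use a surrogate gradient here.
            s = (v >= self.v_threshold).to(x.dtype)
            v = v * (1.0 - s)  # hard reset to 0 after a spike
            spikes.append(s)
        return torch.stack(spikes)

    def tau_variance(self) -> torch.Tensor:
        # Candidate regularization term: the variance of tau within the layer.
        tau = 1.0 / torch.sigmoid(self.w)
        return tau.var()
```

Under these assumptions, the regularization described in the abstract would enter training as a scaled addition to the task loss, e.g. loss = task_loss + lambda_reg * layer.tau_variance(), pushing the per-neuron time constants of a layer toward a common value while still allowing individual deviations.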