Impact of time discretization on the efficiency of continuous-time Spiking Neural Networks

The effects of the time step size on the accuracy, sparsity and latency of the SNN


Abstract

The increasing computational costs of training deep learning models have drawn more and more attention towards power-efficient alternatives such as spiking neural networks (SNNs). SNNs are artificial neural networks that mimic the brain's way of processing information. These models can be even more power-efficient when run on specialized hardware such as digital neuromorphic chips, which are designed to handle the unique processing needs of SNNs, such as sparse and event-driven computation. Much of the state-of-the-art performance of SNNs in recent research has been achieved with supervised learning models that leverage intricate error-backpropagation techniques. These models impose specific constraints on the network and rely on continuous time to facilitate the backpropagation process. This reliance, however, poses a new challenge when porting these mechanisms to a neuromorphic chip: because time is discrete on hardware, numerical errors are introduced, since time-dependent variables can no longer be computed with arbitrary precision. This work proposes a time-discretization technique that allows for fast and stable backpropagation and analyses its effects on the efficiency of the SNN. Specifically, we look at sparsity, latency, and accuracy as the main factors that determine the power efficiency of the model. The experiments show that choosing a suitable time step size can improve sparsity while maintaining a high level of accuracy, whereas time steps that are too large impair the network's ability to learn.
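To make the role of the time step concrete, the sketch below shows a minimal forward-Euler discretization of a leaky integrate-and-fire (LIF) neuron. This is an illustrative assumption, not the discretization technique proposed in this work: the neuron model, parameter names (`tau_mem`, `v_threshold`, `v_reset`), and the Euler scheme are chosen only to show how a larger time step reduces the number of update steps while coarsening the membrane trace and the resolution of spike times.

```python
import numpy as np

def simulate_lif(input_current, dt, tau_mem=20e-3, v_threshold=1.0, v_reset=0.0):
    """Forward-Euler discretization of a leaky integrate-and-fire neuron.

    Continuous dynamics: dv/dt = (-v + I(t)) / tau_mem, with a spike and
    reset whenever v crosses v_threshold. A larger dt means fewer updates
    (cheaper and sparser in time) but a larger discretization error in the
    membrane potential and in the recovered spike times.
    """
    v = v_reset
    spike_times = []
    for step, i_t in enumerate(input_current):
        # Discrete update: v[t + dt] = v[t] + (dt / tau_mem) * (-v[t] + I[t])
        v = v + (dt / tau_mem) * (-v + i_t)
        if v >= v_threshold:
            spike_times.append(step * dt)  # spike time is only known up to dt
            v = v_reset
    return spike_times

# Example: the same 100 ms constant input, discretized with two step sizes.
for dt in (1e-3, 5e-3):
    n_steps = int(0.1 / dt)
    spikes = simulate_lif(np.full(n_steps, 1.5), dt)
    print(f"dt = {dt * 1e3:.0f} ms -> {len(spikes)} spikes, "
          f"first at {spikes[0] * 1e3:.1f} ms")
```

Running the example with the two step sizes illustrates the trade-off discussed in the abstract: the coarser discretization needs far fewer computations per simulated second, but its spike count and spike timing drift away from the finer-grained reference.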