Title
To Spike or Not To Spike: A Digital Hardware Perspective on Deep Learning Acceleration
Author
Ottati, F. (TU Delft Electronic Instrumentation; Politecnico di Torino)
Gao, Chang
Chen, Qinyu (University of Zürich)
Brignone, Giovanni (Politecnico di Torino)
Casu, Mario R. (Politecnico di Torino)
Eshraghian, Jason K. (University of California)
Lavagno, Luciano (Politecnico di Torino)
Date
2023
Abstract
As deep learning models scale, they become increasingly competitive across domains spanning computer vision to natural language processing; however, this comes at the expense of efficiency, since they require increasingly more memory and computing power. The power efficiency of the biological brain outperforms that of any large-scale deep learning (DL) model; neuromorphic computing therefore tries to mimic brain operations, such as spike-based information processing, to improve the efficiency of DL models. Despite the advantages of the brain, such as efficient information transmission, dense neuronal interconnects, and the co-location of computation and memory, the available biological substrate has severely constrained its evolution. Electronic hardware does not face the same constraints; therefore, while modeling spiking neural networks (SNNs) might uncover one piece of the puzzle, the design of efficient hardware backends for SNNs needs further investigation, potentially taking inspiration from the work already done on artificial neural networks (ANNs). When, then, is it wise to look at the brain while designing new hardware, and when should it be ignored? To answer this question, we quantitatively compare the digital hardware acceleration techniques and platforms of ANNs and SNNs. We provide the following insights: (i) ANNs currently process static data more efficiently; (ii) applications targeting data produced by neuromorphic sensors, such as event-based cameras and silicon cochleas, need more investigation, since the behavior of these sensors may naturally fit the SNN paradigm; and (iii) hybrid approaches combining SNNs and ANNs may lead to the best solutions and should be investigated further at the hardware level, accounting for both efficiency and loss optimization.
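To make the spike-based versus dense processing contrast concrete, the following is a minimal illustrative sketch (not taken from the paper) of one discrete-time step of a leaky integrate-and-fire (LIF) neuron layer next to its dense ANN counterpart. The function names, the decay factor `beta`, and the soft-reset behavior are assumptions chosen for illustration; the key point is that binary input spikes reduce the weighted sum to sparse accumulations, whereas the ANN layer performs one multiply-accumulate per weight.

```python
import numpy as np

def lif_step(v, spikes_in, weights, beta=0.9, threshold=1.0):
    """One discrete-time step of a layer of leaky integrate-and-fire
    (LIF) neurons. Because input spikes are binary (0/1), the dot
    product reduces to selective accumulation of weight columns."""
    v = beta * v + weights @ spikes_in          # leaky integration of input
    spikes_out = (v >= threshold).astype(float) # fire on threshold crossing
    v = v - spikes_out * threshold              # soft reset after firing
    return v, spikes_out

def ann_step(x_in, weights):
    """Dense ANN counterpart: one multiply-accumulate per weight,
    regardless of input sparsity, followed by a ReLU activation."""
    return np.maximum(weights @ x_in, 0.0)
```

Under this sketch, an accelerator for the SNN path can skip all work for absent spikes, which is where the potential efficiency gain on event-driven sensor data comes from; the ANN path instead benefits from regular, dense dataflow that maps well onto existing matrix-multiply hardware.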
Subject
Artificial Neural Networks
Biological system modeling
Computational modeling
Deep Learning
Digital Hardware
Energy consumption
Memory management
Neuromorphic Computing
Neurons
Spiking Neural Networks
Task analysis
Training
To reference this document use:
http://resolver.tudelft.nl/uuid:50d9b3e8-990a-459f-88eb-1274d05a3a4f
DOI
https://doi.org/10.1109/JETCAS.2023.3330432
Embargo date
2024-05-13
ISSN
2156-3357
Source
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 13 (4), 1015 - 1025
Bibliographical note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne project, 'You share, we take care!' (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Part of collection
Institutional Repository
Document type
journal article
Rights
© 2023 F. Ottati, Chang Gao, Qinyu Chen, Giovanni Brignone, Mario R. Casu, Jason K. Eshraghian, Luciano Lavagno