Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation

From Events to Global Motion Perception

Journal Article (2019)
Authors

Federico Paredes-Vallés (TU Delft - Control & Simulation)

Kirk Y. W. Scheper (TU Delft - Control & Simulation)

G. C. H. E. de Croon (TU Delft - Control & Simulation)

Research Group
Control & Simulation
Copyright
© 2019 Federico Paredes-Vallés, K.Y.W. Scheper, G.C.H.E. de Croon
To reference this document use:
https://doi.org/10.1109/TPAMI.2019.2903179
Publication Year
2019
Language
English
Issue number
8
Volume number
42 (2020)
Pages (from-to)
2051-2064
DOI:
https://doi.org/10.1109/TPAMI.2019.2903179
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The combination of spiking neural networks and event-based vision sensors holds the potential for highly efficient and high-bandwidth optical flow estimation. This paper presents the first hierarchical spiking architecture in which motion (direction and speed) selectivity emerges in an unsupervised fashion from the raw stimuli generated by an event-based camera. A novel adaptive neuron model and a stable spike-timing-dependent plasticity formulation are at the core of this neural network, governing its spike-based processing and learning, respectively. After convergence, the neural architecture exhibits the main properties of biological visual motion systems, namely feature extraction and local and global motion perception. Convolutional layers with input synapses characterized by single and multiple transmission delays are employed for feature and local motion perception, respectively, while global motion selectivity emerges in a final fully-connected layer. The proposed solution is validated using synthetic and real event sequences. Along with this paper, we provide the cuSNN library, a framework that enables GPU-accelerated simulations of large-scale spiking neural networks. Source code and samples are available at https://github.com/tudelft/cuSNN.
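Illustrative sketch. The abstract references three mechanisms: an adaptive spiking neuron model, a stable spike-timing-dependent plasticity (STDP) rule, and input synapses with multiple transmission delays for local motion selectivity. The minimal Python sketch below shows the general shape of these ingredients under simplified assumptions; the class name, constants, and update equations are placeholders chosen for clarity, not the paper's actual formulation (which is specified in the article and implemented in CUDA in the cuSNN library).

import numpy as np

# Simplified constants; values are illustrative, not from the paper.
DT = 1e-3          # simulation time step [s]
TAU_V = 20e-3      # membrane time constant [s]
TAU_TH = 50e-3     # threshold adaptation time constant [s]
V_REST = 0.0       # resting potential
TH_BASE = 1.0      # baseline firing threshold
TH_JUMP = 0.2      # threshold increment per output spike (adaptation)
LR = 1e-3          # STDP learning rate

class AdaptiveLIF:
    """Leaky integrate-and-fire neuron with an adaptive threshold.

    Hypothetical stand-in for the paper's adaptive neuron model.
    """

    def __init__(self, n_inputs, n_delays):
        # One weight per (input, transmission delay) pair, so the neuron
        # can become selective to both direction and speed of motion.
        self.w = np.random.uniform(0.0, 0.5, size=(n_inputs, n_delays))
        self.v = V_REST
        self.th = TH_BASE

    def step(self, delayed_spikes):
        """Advance one time step.

        delayed_spikes: binary array of shape (n_inputs, n_delays) holding,
        for each synapse, the presynaptic spike that arrives now after its
        associated transmission delay.
        """
        # Leaky integration of the delayed, weighted input current.
        i_in = np.sum(self.w * delayed_spikes)
        self.v += DT / TAU_V * (V_REST - self.v) + i_in

        # Threshold decays back toward its baseline between output spikes.
        self.th += DT / TAU_TH * (TH_BASE - self.th)

        fired = self.v >= self.th
        if fired:
            self.v = V_REST       # reset membrane potential
            self.th += TH_JUMP    # homeostatic threshold increase
        return fired

    def stdp(self, pre_trace, fired):
        """Simplified, stable STDP-like update: on an output spike, pull
        each weight toward the presynaptic trace, so recently active
        synapses are potentiated and inactive ones are depressed.

        pre_trace: array of shape (n_inputs, n_delays) with exponentially
        decaying traces of presynaptic activity.
        """
        if fired:
            self.w += LR * (pre_trace - self.w)
            self.w = np.clip(self.w, 0.0, 1.0)

A neuron like this, replicated across a convolutional layer, would see each input location through several synapses that differ only in delay; learning can then concentrate weight on the delay that matches a stimulus speed, which is how multi-delay synapses can yield speed selectivity in addition to direction selectivity.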

Files

08660483.pdf
(PDF | 3.14 MB)
License info not available