Title: EV-LayerSegNet: Self-supervised Motion Segmentation using Event-based Cameras
Author: Farah, Youssef (TU Delft Aerospace Engineering)
Contributors: de Croon, G.C.H.E. (mentor); Mooij, E. (graduation committee); Ellerbroek, Joost (graduation committee); Paredes Valles, F. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Aerospace Engineering
Date: 2023-07-13

Abstract: Event cameras are novel bio-inspired sensors that capture motion dynamics with much higher temporal resolution than traditional cameras, since their pixels react asynchronously to brightness changes. They are therefore well suited to motion-centric tasks such as motion segmentation. However, training event-based networks remains challenging, as obtaining ground truth is expensive and error-prone. In this article, we introduce EV-LayerSegNet, the first self-supervised CNN for event-based motion segmentation. Inspired by a layered representation of scene dynamics, we show that it is possible to learn affine optical flow and segmentation masks separately and use them to deblur the input events. The deblurring quality is then measured and used as the self-supervised learning loss.

Subject: Event-based vision; Self-supervised learning; Deep learning; Motion segmentation; Affine layered motion model
To reference this document use: http://resolver.tudelft.nl/uuid:bcac496c-6757-4067-b1dd-5d8356486bf8
Part of collection: Student theses
Document type: master thesis
Rights: © 2023 Youssef Farah
Files: Thesis_Report_Youssef_Farah.pdf (PDF, 20.11 MB)
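The abstract's deblurring idea can be illustrated with a small sketch. The exact loss formulation is in the thesis, not this record, so the following follows the general contrast-maximization idea the abstract alludes to: events assigned to a motion layer are warped to a reference time along that layer's affine flow, accumulated into an image, and the image's variance measures how well the motion "deblurs" the events. All function names (`warp_events`, `iwe_variance`, `deblurring_loss`) and the choice of variance as the sharpness metric are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def warp_events(xy, t, t_ref, A, b):
    # Affine flow model v(x) = A @ x + b (A, b are one layer's assumed
    # motion parameters). Each event is propagated to the reference time
    # along its flow: x' = x + (t_ref - t) * v(x).
    v = xy @ A.T + b                      # per-event velocity, shape (N, 2)
    return xy + (t_ref - t)[:, None] * v  # warped coordinates, shape (N, 2)

def iwe_variance(xy_w, weights, H, W):
    # Accumulate (soft-masked) warped events into an image of warped events
    # and measure its variance: a sharper image means a better motion fit.
    img = np.zeros((H, W))
    xi = np.clip(np.round(xy_w[:, 0]).astype(int), 0, W - 1)
    yi = np.clip(np.round(xy_w[:, 1]).astype(int), 0, H - 1)
    np.add.at(img, (yi, xi), weights)     # unbuffered scatter-add
    return img.var()

def deblurring_loss(xy, t, masks, affines, H, W, t_ref=0.0):
    # masks: (K, N) soft assignment of each event to K motion layers.
    # affines: list of (A, b) per layer. The loss is the negative summed
    # variance, so minimizing it maximizes deblurring quality.
    total = 0.0
    for k, (A, b) in enumerate(affines):
        xy_w = warp_events(xy, t, t_ref, A, b)
        total += iwe_variance(xy_w, masks[k], H, W)
    return -total
```

As a sanity check, events generated by a point translating at 1 px/s yield a lower loss (sharper warped image) when warped with the true velocity than with zero velocity, which is the signal the self-supervised training would exploit.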