CNN-based Ego-Motion Estimation for Fast MAV Maneuvers

Conference Paper (2021)
Authors

Yingfu Xu (TU Delft - Control & Simulation)

G. C. H. E. de Croon (TU Delft - Control & Simulation)

Research Group
Control & Simulation
To reference this document use:
https://doi.org/10.1109/ICRA48506.2021.9561714
Publication Year
2021
Language
English
Pages (from-to)
7606-7612
ISBN (print)
978-1-7281-9078-5
ISBN (electronic)
978-1-7281-9077-8

Abstract

In the field of visual ego-motion estimation for Micro Air Vehicles (MAVs), fast maneuvers remain challenging, mainly because of the large visual disparity and motion blur they cause. In pursuit of higher robustness, we study convolutional neural networks (CNNs) that predict the relative pose between subsequent images from a fast-moving monocular camera facing a planar scene. Aided by the Inertial Measurement Unit (IMU), we focus mainly on translational motion. The networks we study have similarly small model sizes (around 1.35 MB) and high inference speeds (around 10 milliseconds on a mobile GPU). Images for training and testing have realistic motion blur. Starting from a network framework that iteratively warps the first image to match the second with cascaded network blocks, we study different network architectures and training strategies. Simulated datasets and a self-collected MAV flight dataset are used for evaluation. The proposed setup shows better accuracy than existing networks and traditional feature-point-based methods during fast maneuvers. Moreover, self-supervised learning outperforms supervised learning. Videos and open-sourced code are available at https://github.com/tudelft/PoseNet_Planar
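The cascaded warp-and-refine idea in the abstract can be illustrated with a toy NumPy sketch. This is not the paper's network: `predict_shift` below is a hypothetical stand-in for one CNN block (a brute-force correlation search over pure pixel translations), whereas the actual method predicts a full relative pose and warps via a homography for the planar scene. The structure of the loop, however, mirrors the described framework: each block refines the motion estimate, and the first image is re-warped with the accumulated estimate before the next block runs.

```python
import numpy as np

def warp_translate(img, shift):
    # Circularly shift the image by integer pixels (dy, dx); a placeholder
    # for the homography warp a planar scene would actually require.
    return np.roll(img, shift, axis=(0, 1))

def predict_shift(img_a, img_b, window=3):
    # Hypothetical stand-in for one CNN block: estimate the residual
    # translation by brute-force correlation over a small search window.
    best, best_score = (0, 0), -np.inf
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            score = np.sum(warp_translate(img_a, (dy, dx)) * img_b)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def cascaded_estimate(img1, img2, n_blocks=3):
    # Iteratively warp img1 toward img2 with cascaded refinement blocks,
    # accumulating the motion estimate across blocks.
    total = np.array([0, 0])
    warped = img1
    for _ in range(n_blocks):
        residual = predict_shift(warped, img2)
        total += residual
        warped = warp_translate(img1, tuple(total))
    return (int(total[0]), int(total[1]))
```

In the paper, each block is a learned network and the warp is differentiable, which is what makes the self-supervised (photometric) training mentioned in the abstract possible; here the correlation search merely plays the role of a single refinement step.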

No files available: this is a metadata-only record.