Deep Vision-based Relative Localisation by Monocular Drones

Abstract

Decentralised drone swarms need real-time collision avoidance, which in turn requires efficient, real-time relative localisation. This paper explores different data inputs for vision-based relative localisation. It introduces a novel dataset generated in Blender that provides ground-truth optic flow and depth. Comparisons with MPI Sintel, an industry- and research-standard optic flow dataset, show it to be a challenging and realistic dataset. Two Deep Neural Network (DNN) architectures (YOLOv3 and U-Net) were trained on this data, comparing optic flow with colour images as inputs for relative positioning. The results indicate that using optic flow provides a significant advantage in relative localisation. The flow-based YOLOv3 achieved an mAP of 48%, 9% better than the RGB-based YOLOv3 and 23% better than its U-Net equivalent. Its IoU@0.5 of 63% was also 14% better than that of the RGB-based YOLOv3 and 51% better than that of its U-Net equivalent. As an input, optic flow also generalises better than RGB, as test clips with variant drones show: on these variants, the optic-flow-based networks outperformed the RGB-based networks by a factor of 10.