Development of a Platform for Stereo Visual Odometry based Platooning

Abstract

As autonomous driving is a popular and ever-growing field of research, real-world experiments are an essential means of testing. In this thesis, a driving research platform is developed, with a focus on platooning using visual messaging. These visual messages are conveyed using LED matrices. The thesis proposes two methods of LED matrix detection using YOLOv2: one using a sliding window, and one using the entire image. Furthermore, two methods of distance estimation are proposed: one using the centers of the detected bounding boxes, and one using the depth map from the camera's proprietary toolbox. An online experiment shows that the depth-map-based distance estimation performs best. The LED matrix detection using a sliding window gave generally dependable results across different environments, at the cost of being computationally demanding. The detection using the entire image provided less consistent results, but was significantly less computationally demanding. In a second, offline experiment using a pre-annotated validation dataset as ground truth, all LED matrices were detected by all detectors. The SqueezeNet-based YOLOv2 detector using a sliding window performed best among the tested detectors, achieving the highest intersection over union (IoU) between detections and ground truth.
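The evaluation metric named in the abstract, intersection over union, compares a detected bounding box against its ground-truth annotation. The sketch below is a minimal illustration of that metric, not the thesis code; it assumes boxes are given as (x1, y1, x2, y2) corner coordinates.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    Illustrative sketch only; the thesis implementation may differ."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area is zero when the boxes do not overlap
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])

    return inter / (area_a + area_b - inter)

# Example: a detection overlapping half of a ground-truth box
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.333

An IoU of 1.0 indicates a perfect overlap between detection and ground truth, which is why the detector with the highest IoU is reported as the best performer.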