Development of a Platform for Stereo Visual Odometry based Platooning

Master Thesis (2021)
Author(s)

S.J. van der Marel (TU Delft - Mechanical Engineering)

Contributor(s)

R. M.G. Ferrari – Mentor (TU Delft - Team Riccardo Ferrari)

Twan Keijzer – Graduation committee member (TU Delft - Team Riccardo Ferrari)

J. Alonso-Mora – Graduation committee member (TU Delft - Learning & Autonomous Control)

Faculty
Mechanical Engineering
Copyright
© 2021 Simon van der Marel
Publication Year
2021
Language
English
Graduation Date
25-06-2021
Awarding Institution
Delft University of Technology
Programme
Mechanical Engineering | Systems and Control
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Autonomous driving is a popular and rapidly growing field of research, and real-world experiments are an essential means of testing. In this thesis a driving research platform is developed, with a focus on platooning using visual messaging. These visual messages are conveyed using LED matrices. The thesis proposes two methods of LED matrix detection using YOLOv2: one using a sliding window and one using the entire image. Furthermore, two ways of distance estimation are proposed: one using the centers of the estimated bounding boxes and one using the depth map produced by the camera's proprietary toolbox. Results from an online experiment show that the depth-map-based distance estimation performs best. The LED matrix detection using a sliding window gave generally dependable results across different environments, at the cost of being computationally demanding. The detection using the entire image provided less consistent results, but was significantly less computationally demanding. In a second, offline experiment, using a pre-annotated validation dataset as ground truth, all LED matrices were detected by all detectors. The SqueezeNet-based YOLOv2 detector using a sliding window performed best among the tested detectors, achieving the highest intersection over union between detection and ground truth.
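The abstract mentions distance estimation from the centers of stereo bounding boxes and an intersection-over-union comparison against ground truth. As a rough, illustrative sketch only (not the thesis code), the Python below applies the standard stereo relation Z = f * B / d to the horizontal disparity between left- and right-image box centers, and computes a plain IoU; the function names, the [x, y, w, h] box convention, and the focal_px/baseline_m parameters are assumptions for illustration.

def depth_from_box_centers(box_left, box_right, focal_px, baseline_m):
    # Horizontal disparity between the centers of the left- and right-image
    # bounding boxes (boxes given as [x, y, w, h] in pixels).
    cx_left = box_left[0] + box_left[2] / 2.0
    cx_right = box_right[0] + box_right[2] / 2.0
    disparity = cx_left - cx_right
    if disparity <= 0:
        return float("inf")                    # degenerate match, no usable depth
    return focal_px * baseline_m / disparity   # standard stereo relation Z = f * B / d

def intersection_over_union(box_a, box_b):
    # IoU between two axis-aligned boxes given as [x, y, w, h] in pixels.
    ax2, ay2 = box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx2, by2 = box_b[0] + box_b[2], box_b[1] + box_b[3]
    inter_w = max(0.0, min(ax2, bx2) - max(box_a[0], box_b[0]))
    inter_h = max(0.0, min(ay2, by2) - max(box_a[1], box_b[1]))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0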

Files

Thesis_SvdMarel_Final.pdf
(pdf | 45.2 MB)
License info not available