Targetless Camera-LiDAR Calibration for Autonomous Systems

Master Thesis (2021)
Author(s)

B. ZHANG (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Raj T. Rajan – Mentor (TU Delft - Signal Processing Systems)

Richard Hendriks – Graduation committee member (TU Delft - Signal Processing Systems)

S. Speretta – Graduation committee member (TU Delft - Space Systems Engineering)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2021 Bichi ZHANG
Publication Year
2021
Language
English
Graduation Date
12-10-2021
Awarding Institution
Delft University of Technology
Project
ADACORSA
Programme
Electrical Engineering | Circuits and Systems
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In recent decades, the field of autonomous driving has witnessed rapid development, benefiting from advances in artificial-intelligence technologies such as machine learning. Perception is a key challenge in autonomous driving, and multi-sensor fusion is a common approach to it. Owing to its high resolution and rich information, the camera is one of the core perceptual sensors in autonomous systems. However, the camera provides no knowledge of distance (or depth), which is insufficient for the requirements of autonomous driving. LiDAR, on the other hand, provides accurate distance measurements, but the information is sparse. The complementary characteristics of cameras and LiDAR have been exploited over the past decade for autonomous navigation. To fuse the camera and LiDAR measurements jointly, an efficient and accurate inter-sensor calibration process is essential. Conventional methods for calibrating the camera and LiDAR rely on deploying artificial targets, e.g., a checkerboard, in the field. Given the impracticality of such solutions, targetless calibration methods have been proposed in recent years, which require no human intervention and are readily applicable to various autonomous systems, e.g., automotive vehicles, drones, rovers, and robots.

In this thesis, we review and analyze several classic targetless calibration schemes. Motivated by their shortcomings, we propose a new multi-feature workflow called MulFEA (Multi-Feature Edge Alignment). MulFEA uses cylindrical projection to transform the 3D-2D calibration problem into a 2D-2D calibration problem, and exploits a variety of LiDAR feature information to supplement the scarce LiDAR point-cloud boundaries, achieving higher feature similarity with the camera images. In addition, a feature-matching function with a precision factor is designed to improve the smoothness of the objective-function solution space and reduce local optima. Our results are validated on the open-source KITTI dataset and compared against several existing targetless calibration methods. Across many different types of roadway environments, our algorithm yields a better-shaped objective function in the 6-DOF space, which is more conducive to optimization. Finally, we analyze the shortcomings of the proposed solution and outline prospects for future research in joint camera-LiDAR calibration.
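As an illustration of the cylindrical projection step mentioned in the abstract, the sketch below maps 3D LiDAR points to 2D image-plane coordinates (azimuth and elevation angles divided by an angular resolution). The resolutions `h_res` and `v_res` and the axis conventions are illustrative assumptions, not parameters taken from the thesis.

```python
import numpy as np

def cylindrical_projection(points, h_res=0.2, v_res=0.4):
    """Project an (N, 3) array of LiDAR points onto a 2D cylindrical image plane.

    h_res, v_res: angular resolution in degrees per pixel (assumed values).
    Returns an (N, 2) array of (u, v) pixel coordinates.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2)                  # horizontal range from the sensor
    azimuth = np.degrees(np.arctan2(y, x))    # angle around the vertical axis
    elevation = np.degrees(np.arctan2(z, r))  # angle above the horizontal plane
    u = azimuth / h_res                       # horizontal pixel coordinate
    v = elevation / v_res                     # vertical pixel coordinate
    return np.stack([u, v], axis=1)
```

After this unrolling, each LiDAR point has 2D image-like coordinates, so edge features extracted from the point cloud can be compared directly with edges in the camera image.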

Files

Thesis_Bichi_Zhang.pdf
(pdf | 23.2 Mb)
- Embargo expired in 01-03-2022
License info not available