Title: Targetless Camera-LiDAR Calibration for Autonomous Systems
Author: ZHANG, Bichi (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Rajan, R.T. (mentor); Hendriks, R.C. (graduation committee); Speretta, S. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Electrical Engineering | Circuits and Systems
Project: ADACORSA
Date: 2021-10-12

Abstract:
In recent decades, the field of autonomous driving has witnessed rapid development, benefiting from advances in artificial-intelligence technologies such as machine learning. Perception is a key challenge in autonomous driving, and multi-sensor fusion is a common approach to it. Owing to its high resolution and rich information content, the camera is one of the core perceptual sensors in autonomous systems. However, a camera provides no distance (depth) information, which falls short of the requirements of autonomous driving. LiDAR, on the other hand, provides accurate distance measurements, but the information is sparse. These complementary characteristics of cameras and LiDAR have been exploited for autonomous navigation over the past decade. To fuse camera and LiDAR measurements jointly, an efficient and accurate calibration process between the sensors is essential. Conventional methods for calibrating the camera and LiDAR rely on deploying artificial targets, e.g., checkerboards, in the field. Given the impracticality of such solutions, targetless calibration methods have been proposed in recent years; they require no human intervention and are readily applicable to various autonomous systems, e.g., cars, drones, rovers, and robots.

In this thesis, we review and analyze several classic targetless calibration schemes. Motivated by their shortcomings, we propose a new multi-feature workflow called MulFEA (Multi-Feature Edge Alignment). MulFEA uses cylindrical projection to transform the 3D-2D calibration problem into a 2D-2D one, and it exploits a variety of LiDAR features to supplement the sparse LiDAR point-cloud boundaries, achieving higher feature similarity with the camera images. In addition, a feature-matching function with a precision factor is designed to smooth the objective function's solution space and reduce local optima. Our results are validated on the open-source KITTI dataset and compared against several existing targetless calibration methods. Across many types of roadway environments, our algorithm yields a better-shaped objective function in the 6-DOF space, which is more conducive to optimization. Finally, we analyze the shortcomings of the proposed solution and outline prospects for future research on joint camera-LiDAR calibration algorithms.

Subject: sensor fusion; extrinsic calibration; camera-LiDAR system; autonomous driving
To reference this document use: http://resolver.tudelft.nl/uuid:b9da4c50-55c8-4d6b-9c20-68b51237db76
Embargo date: 2022-03-01
Part of collection: Student theses
Document type: master thesis
Rights: © 2021 Bichi ZHANG
Files: Thesis_Bichi_Zhang.pdf (PDF, 23.19 MB)
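The abstract names two concrete building blocks: a cylindrical projection that turns the 3D-2D calibration problem into a 2D-2D one, and an edge-alignment objective smoothed by a precision factor. The thesis's exact formulations are only available in the PDF, so the Python sketches below are illustrative assumptions about how such steps are commonly implemented, not the author's code. First, one common cylindrical projection, mapping azimuth to image columns and cylinder height z/r to rows; the image size and vertical span are hypothetical defaults:

```python
import numpy as np

def project_to_cylinder(points, width=1024, height=64,
                        v_min=-0.35, v_max=0.15):
    """Map LiDAR points (N, 3) onto a cylindrical image of size
    (height, width).

    width, height, v_min, and v_max are hypothetical defaults, not
    values from the thesis; v_min/v_max bound the height z / r of a
    point projected onto the unit cylinder.
    """
    x, y, z = points.T
    r = np.hypot(x, y)                        # horizontal range to the cylinder axis
    theta = np.arctan2(y, x)                  # azimuth angle in (-pi, pi]
    h = z / np.maximum(r, 1e-9)               # height on the unit cylinder
    u = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)   # column
    v = ((v_max - h) / (v_max - v_min) * (height - 1)).astype(int)  # row
    keep = (v >= 0) & (v < height)            # drop points outside the vertical span
    return u[keep], v[keep]
```

Second, a generic edge-alignment score of the kind MulFEA optimizes over the 6-DOF extrinsics. Here a Gaussian blur of the camera edge map stands in for the abstract's precision factor, since blurring likewise smooths the solution space and suppresses local optima; apply_extrinsics, project_fn, and sigma are all hypothetical names and values:

```python
from scipy.ndimage import gaussian_filter

def apply_extrinsics(points, T):
    """Apply a 4x4 homogeneous transform T to (N, 3) points."""
    return points @ T[:3, :3].T + T[:3, 3]

def alignment_score(edge_points, edge_map, T, project_fn, sigma=3.0):
    """Sum camera edge strength at the projected LiDAR edge points.

    edge_map is a 2D edge-strength image (e.g., from a gradient
    detector). Blurring it widens the basin of attraction around the
    true extrinsics, the smoothing role the abstract assigns to its
    precision factor.
    """
    smooth = gaussian_filter(edge_map.astype(float), sigma)
    u, v = project_fn(apply_extrinsics(edge_points, T))
    h, w = smooth.shape
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return smooth[v[keep], u[keep]].sum()     # higher = better alignment
```

A calibration loop would then search the 6-DOF space, e.g., with a grid search or a gradient-free optimizer, for the transform T that maximizes alignment_score.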