Targetless Camera-LiDAR Calibration for Autonomous Systems

In recent decades, the field of autonomous driving has developed rapidly, benefiting from advances in artificial-intelligence-related technologies such as machine learning. Perception is a key challenge in autonomous driving, and multi-sensor fusion is a common approach to it. Owing to its high resolution and rich information, the camera is one of the core perceptual sensors in autonomous systems. However, the camera provides no knowledge of distance (or depth), which falls short of the requirements of autonomous driving. LiDAR, on the other hand, provides accurate distance measurements, but its information is sparse. The complementary characteristics of cameras and LiDAR have been exploited over the past decade for autonomous navigation. To fuse the camera and LiDAR into a joint sensor system, an efficient and accurate calibration process between the sensors is essential. Conventional methods for calibrating the camera and LiDAR rely on deploying artificial targets, e.g., checkerboards, in the field. Given the impracticality of such solutions, targetless calibration methods have been proposed in recent years; they require no human intervention and are readily applicable to various autonomous systems, e.g., automobiles, drones, rovers, and robots.

In this thesis, we review and analyze several classic targetless calibration schemes. Motivated by some of their shortcomings, we propose a new multi-feature workflow called MulFEA (Multi-Feature Edge Alignment). MulFEA uses cylindrical projection to transform the 3D-2D calibration problem into a 2D-2D calibration problem, and exploits a variety of LiDAR feature information to supplement the scarce LiDAR point-cloud boundaries, achieving higher feature similarity with camera images. In addition, a feature-matching function with a precision factor is designed to smooth the objective function's solution space and reduce local optima. Our results are validated on the open-source KITTI dataset and compared with several existing targetless calibration methods. In many different types of roadway environments, our algorithm yields a better-shaped objective function in the 6-DOF space, which is more conducive to optimization. Finally, we analyze the shortcomings of our proposed solution and outline prospects for future research on joint camera-LiDAR calibration algorithms.
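The cylindrical projection step mentioned above can be illustrated with a minimal sketch. The code below is not the thesis implementation; it is a generic cylindrical (range-image) projection, with hypothetical image dimensions and vertical field-of-view values loosely based on a typical 64-beam spinning LiDAR. Azimuth maps to image columns and elevation to rows, turning the 3D point cloud into a 2D image on which 2D-2D edge alignment can be performed.

```python
import numpy as np

def cylindrical_projection(points, width=1024, height=64,
                           fov_up=np.deg2rad(2.0), fov_down=np.deg2rad(-24.8)):
    """Project 3D LiDAR points (N, 3) onto a 2D cylindrical range image.

    Azimuth maps to columns, elevation to rows. The FOV defaults are
    illustrative assumptions, not values taken from the thesis.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range of each point
    azimuth = np.arctan2(y, x)                      # [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))  # angle above sensor plane

    # Normalize both angles into integer pixel coordinates.
    u = (0.5 * (1.0 - azimuth / np.pi) * width).astype(int) % width
    v_frac = (fov_up - elevation) / (fov_up - fov_down)
    v = np.clip((v_frac * height).astype(int), 0, height - 1)

    # Keep the nearest point per pixel: write far points first so that
    # nearer points overwrite them (a simple z-buffer by range).
    image = np.full((height, width), np.inf)
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    image[np.isinf(image)] = 0.0                    # empty pixels -> 0
    return image
```

Depth discontinuities in such a range image produce strong gradients, which is one way edge features can be extracted from the projected point cloud and matched against edges detected in the camera image.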