Multi-feature-based Automatic Targetless Camera-LiDAR Extrinsic Calibration

Abstract

In autonomous driving, environmental perception, crucial for navigation and decision-making, depends on integrating data from multiple sensors such as cameras and LiDAR. Camera-LiDAR fusion combines detailed imagery with precise depth measurements, improving environmental awareness. Effective data fusion requires accurate extrinsic calibration to align camera and LiDAR data in a single coordinate system. Targetless but non-automated calibration methods are time-consuming and labor-intensive, so we aim to calibrate the camera-LiDAR extrinsic parameters automatically and without dedicated targets. Existing work has shown that automatic calibration based on edge features is effective, but most methods focus on extracting and matching a single feature type. The proposed method matches 2D edges from the LiDAR's multi-attribute density map against intensity-gradient and semantic edges derived from the image, enabling 2D-2D edge registration. We incorporate semantic features and replace the random initial guess by solving a PnP problem on matched centroid pairs, which improves the convergence of the objective function. We also introduce a weighted multi-frame averaging technique that accounts for inter-frame correlation and semantic importance, yielding smoother calibration results. Tested on the KITTI dataset, the method surpasses four recent methods in single-frame tests and is more robust than MulFEAT in multi-frame tests.
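As a concrete illustration of the initialization and registration steps described above, the Python sketch below solves a PnP problem on matched 3D-2D centroid pairs (via OpenCV) to replace a random initial guess, and scores a candidate extrinsic by the image edge strength sampled under the projected LiDAR edge points. All function and variable names are assumptions for illustration only, not the thesis implementation.

```python
# Minimal sketch, assuming matched LiDAR/image centroid pairs and an
# image edge map are already available (their extraction is not shown).
import numpy as np
import cv2

def init_extrinsic_from_centroids(lidar_centroids_3d, image_centroids_2d, K):
    """Solve PnP on matched 3D (LiDAR) / 2D (image) centroid pairs to get
    an initial rotation and translation instead of a random guess."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(lidar_centroids_3d, dtype=np.float64),
        np.asarray(image_centroids_2d, dtype=np.float64),
        K, distCoeffs=None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed; need >= 4 well-spread centroid pairs")
    R, _ = cv2.Rodrigues(rvec)            # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3)

def edge_alignment_score(lidar_edge_uv, image_edge_map):
    """Score a candidate extrinsic: sum the image edge strength at the
    projected LiDAR edge pixels (higher = better 2D-2D edge alignment)."""
    h, w = image_edge_map.shape
    u = np.clip(lidar_edge_uv[:, 0].round().astype(int), 0, w - 1)
    v = np.clip(lidar_edge_uv[:, 1].round().astype(int), 0, h - 1)
    return float(image_edge_map[v, u].sum())
```

In this sketch the PnP solution would seed an optimizer that maximizes `edge_alignment_score` over the extrinsic parameters; the actual objective and optimizer used in the thesis may differ.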
Our algorithm leverages semantic information for extrinsic calibration, striking a balance between network complexity and robustness. Future enhancements may include using machine learning to convert the sparse LiDAR maps into dense representations for improved optimization efficiency.
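For the multi-frame stage, the sketch below shows one way to fuse per-frame extrinsic estimates with a weighted average, using SciPy's rotation mean. The weighting scheme shown (normalized per-frame scores) is an assumption standing in for the thesis's correlation- and semantics-based weights.

```python
# Minimal sketch of weighted multi-frame averaging of extrinsic estimates.
import numpy as np
from scipy.spatial.transform import Rotation

def average_extrinsics(rotations, translations, weights):
    """Fuse per-frame (R, t) estimates into one smoothed extrinsic.

    rotations:    list of 3x3 rotation matrices, one per frame
    translations: list of length-3 vectors, one per frame
    weights:      per-frame weights, e.g. edge-overlap / semantic scores
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                        # normalize to a convex combination
    # Weighted mean on the rotation manifold, then a weighted mean of t.
    R_mean = Rotation.from_matrix(np.stack(rotations)).mean(weights=w)
    t_mean = (w[:, None] * np.asarray(translations)).sum(axis=0)
    return R_mean.as_matrix(), t_mean
```

Averaging rotations via `Rotation.mean` avoids the pitfalls of naively averaging rotation matrices, which generally do not stay on the rotation manifold.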

Files

Thesis_CHEN_FinalV.pdf
(pdf | 80.7 MB)
Unknown license