Visual-Lidar Feature Detection for Relative Pose Estimation of an Unknown Spacecraft

Abstract

The ever-increasing demand for space endeavours calls for a more sustainable exploitation of the space environment. Approaches to mitigate space debris, such as active debris removal (ADR), are frequently suggested. Most of these proposals rely on a single sensor to provide accurate and continuous shape and pose estimation of the target. However, a more robust system can be developed by relying on input from multiple sensors with different modalities.

Compared to earlier literature, several alternative multimodal methods using visual-Lidar data have been researched during this thesis, of which the most promising has been investigated in more detail. This method determines the 3D location of 2D features by projecting them onto detected 3D planes, thereby fusing the visual-Lidar data at feature level. The visual-Lidar data has been acquired both through simulation in Blender using Blensor and through experiments with a visual camera, a scanning Lidar, and a robotic arm.
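The core fusion step, lifting a 2D image feature to 3D via a Lidar-detected plane, can be sketched as a ray-plane intersection under a pinhole camera model. The function name, parameters, and numbers below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def lift_feature_to_plane(uv, K, plane_n, plane_d):
    """Lift a 2D image feature to 3D by intersecting its viewing ray
    with a plane n . x = d expressed in the camera frame.

    uv      : (u, v) pixel coordinates of the detected 2D feature
    K       : 3x3 camera intrinsic matrix (pinhole model)
    plane_n : unit normal of the Lidar-detected plane (camera frame)
    plane_d : plane offset, so points x on the plane satisfy n . x = d
    """
    # Back-project the pixel to a viewing ray through the camera centre
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    denom = plane_n @ ray
    if abs(denom) < 1e-9:
        return None  # viewing ray parallel to the plane: no intersection
    t = plane_d / denom
    if t <= 0:
        return None  # intersection lies behind the camera
    return t * ray  # 3D feature location in the camera frame

# Example: a feature at the principal point, with a fronto-parallel
# plane 2 m in front of the camera, lifts to the point (0, 0, 2).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
p = lift_feature_to_plane((320.0, 240.0), K,
                          np.array([0.0, 0.0, 1.0]), 2.0)
```

The plane parameters (`plane_n`, `plane_d`) would come from a plane-segmentation step on the Lidar point cloud; the sketch assumes both sensors are already extrinsically calibrated into a common frame.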

To verify and validate the proposed multimodal feature detection method, the detected 3D features are compared directly to the ground truth. In addition, the method is verified through analysis of the end-to-end process for estimating the relative pose of an unknown target, where the resulting 3D features serve as input to a particle filter combined with an EKF, based on the FastSLAM algorithm. The proposed method showed promising results, encouraging further research into determining the pose from plane normal vectors so that relative pose estimation can operate under adverse illumination conditions.
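The suggested follow-up, recovering attitude from plane normal vectors, can be sketched as a Wahba-type alignment problem solved in closed form with an SVD (Kabsch solution). The function name and test data below are illustrative assumptions, not from the thesis:

```python
import numpy as np

def rotation_from_normals(normals_model, normals_obs):
    """Estimate the rotation R that best maps model-frame plane normals
    onto observed normals in a least-squares sense (Kabsch/SVD).

    normals_model, normals_obs : (N, 3) arrays of corresponding unit
    normals; N >= 2 non-parallel normals are needed for a unique R.
    """
    A = np.asarray(normals_obs).T @ np.asarray(normals_model)
    U, _, Vt = np.linalg.svd(A)
    # Guard against a reflection (det = -1) in the SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

# Example: three face normals of a box, observed after a 90 degree
# rotation about the z-axis; the estimate recovers that rotation.
model = np.eye(3)  # model-frame normals along x, y and z
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
obs = (R_true @ model.T).T
R_est = rotation_from_normals(model, obs)
```

Because plane normals come from Lidar geometry rather than image intensity, such an attitude estimate would be insensitive to illumination, which is the motivation stated above.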