Bridging The Domain Shift of CNN-Based Pose Estimation Systems in Active Debris Removal Scenarios

Abstract (2022)
Authors

Lorenzo Pasqualetto Cassinis (Space Systems Engineering)

Alessandra Menicucci (Space Systems Engineering)

Eberhard K A Gill (Space Systems Engineering)

Ingo Ahrns (Airbus Defence and Space GmbH)

Manuel Sanchez-Gestido (European Space Agency (ESA))

Affiliation
Space Systems Engineering
Publication Year
2022
Language
English
Volume number
2022-September
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The estimation of the relative pose of an inactive spacecraft by an active servicer spacecraft is a critical task for close-proximity operations such as In-Orbit Servicing and Active Debris Removal. Among the challenges, the scarcity of real space images of the inactive satellite makes the on-ground validation of current monocular camera-based navigation systems particularly difficult, since standard Image Processing (IP) algorithms, which are usually tested on synthetic images, tend to fail when deployed in orbit. In response to this need for reliable validation of pose estimation systems, this paper presents the on-ground validation of a Convolutional Neural Network (CNN)-based monocular pose estimation system on representative rendezvous scenarios recreated in ESA's GNC Rendezvous, Approach and Landing Simulator (GRALS) testbed. Special focus is given to solving the domain shift problem that affects CNNs trained on synthetic datasets when they are tested on more realistic imagery. The validation of the proposed system is supported by a calibration framework that returns an accurate reference relative pose between the target spacecraft and the camera for each lab-generated image, allowing a comparative assessment at the pose estimation level. The VICON Tracker System is used together with two KUKA robotic arms to respectively track and control the trajectory of the monocular camera around a 1:25 scaled mockup of the Envisat spacecraft. After an overview of the facility, this work describes a novel data augmentation technique based on texture randomization, aimed at improving the CNN robustness against previously unseen target textures. Despite the feature detection challenges under extreme brightness and illumination conditions, the results on the high-exposure scenario show that the proposed system is capable of bridging the domain shift from synthetic to lab-generated images, returning accurate pose estimates for more than 50% of the rendezvous trajectory images notwithstanding the large domain gaps in target texture and illumination conditions.
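The texture randomization augmentation is only described at a high level in the abstract; the sketch below is a minimal illustration of what such an augmentation step could look like, assuming Python with NumPy and Pillow and a per-image silhouette mask of the target. The function names, parameters, and blending scheme are assumptions for illustration and do not reproduce the authors' actual pipeline.

# Illustrative sketch of texture randomization for CNN training data,
# assuming the synthetic renderer also provides a target silhouette mask.
import numpy as np
from PIL import Image

def random_texture(size, rng):
    # Smooth random grayscale texture: upsample low-frequency noise.
    coarse = (rng.random((8, 8)) * 255).astype(np.uint8)
    tex = Image.fromarray(coarse).resize(size, Image.BILINEAR)
    return np.asarray(tex, dtype=np.float32)[..., None]  # H x W x 1

def randomize_target_texture(image, target_mask, rng=None, alpha_range=(0.3, 0.8)):
    # image:       H x W x 3 uint8 rendered training image
    # target_mask: H x W bool array, True where the target spacecraft is visible
    rng = rng or np.random.default_rng()
    img = image.astype(np.float32)
    tex = random_texture((image.shape[1], image.shape[0]), rng)  # PIL size is (W, H)
    alpha = rng.uniform(*alpha_range)                            # random blending strength
    blended = (1.0 - alpha) * img + alpha * tex                  # overlay random texture
    img[target_mask] = blended[target_mask]                      # only inside the target silhouette
    return np.clip(img, 0, 255).astype(np.uint8)

Applying such a perturbation to each synthetic image before training exposes the CNN to many surrogate target textures, which is the stated goal of reducing overfitting to the rendered texture and improving robustness on lab-generated imagery.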
