Sensor Fusion for Visual-Inertial Simultaneous Localisation and Mapping

Applied and tested on a small ground-based mini rover

Abstract

The generation of a 3D map of an unseen environment, obtained by solving the SLAM problem, is currently a popular topic in the robotics domain. The Lunar Rover Mini (LRM) at the German Aerospace Center solves this problem using an RGB-D camera system, which is favourable in space applications due to its low weight and energy efficiency. SLAM based on camera images relies on visual odometry: estimating the rover's trajectory from a sequence of images. However, the dependency on a single sensor to perform mapping and navigation poses a threat to the reliability of the system. To increase the reliability and robustness of the SLAM algorithm, an inertial measurement unit (IMU) is incorporated in the robot hardware.
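As an illustration of the visual-odometry principle only (not the LRM's actual RGB-D pipeline), the sketch below estimates the relative camera motion between two consecutive frames from matched features and an essential matrix. The intrinsics matrix K and the image file names are placeholder assumptions.

```python
# Hypothetical sketch of frame-to-frame visual odometry (not the LRM pipeline).
# The intrinsics K and the image files are placeholders supplied by the user.
import cv2
import numpy as np

K = np.array([[525.0, 0.0, 319.5],   # placeholder pinhole intrinsics
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)

pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# Recover the relative rotation R and (unit-scale) translation direction t.
E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=mask)
print("Relative rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```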
This thesis describes the design of a visual-inertial SLAM algorithm that incorporates both visual and inertial measurements to solve the SLAM problem through tightly coupled sensor fusion, which estimates and corrects for IMU biases. The solution is based on a non-linear factor graph, a graphical model that represents the relationships between the rover's measurements and the unknown variables being optimised, and is implemented using the open-source GTSAM framework. Using experimental data, the robustness of the novel visual-inertial SLAM algorithm is demonstrated by simulating specific sensor failures. Moreover, the novel algorithm shows its capability to attach a degree of certainty to specific areas of the generated map, closely resembling how a human being would map an unknown area.
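As a minimal sketch of what such a tightly coupled factor graph can look like in the GTSAM Python wrapper (not the thesis implementation; all noise values, measurements and symbols are placeholder assumptions), the example below combines a visual-odometry between-pose factor and a preintegrated IMU factor over the same pose variables, so that poses, velocities and IMU biases are estimated jointly.

```python
# Hypothetical tightly coupled visual-inertial factor graph in GTSAM (Python wrapper).
# Noise sigmas, gravity and measurements are placeholders, not the thesis values.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, V, B  # pose, velocity and IMU-bias keys

graph = gtsam.NonlinearFactorGraph()

# Priors on the first pose, velocity and bias anchor the graph.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(),
                                 gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))))
graph.add(gtsam.PriorFactorVector(V(0), np.zeros(3),
                                  gtsam.noiseModel.Isotropic.Sigma(3, 0.1)))
graph.add(gtsam.PriorFactorConstantBias(B(0), gtsam.imuBias.ConstantBias(),
                                        gtsam.noiseModel.Isotropic.Sigma(6, 0.1)))

# Visual-odometry constraint between consecutive poses (placeholder measurement).
vo_delta = gtsam.Pose3(gtsam.Rot3(), np.array([0.10, 0.0, 0.0]))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), vo_delta,
                                   gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))))

# Preintegrated IMU measurements over the same interval couple poses, velocities and bias.
params = gtsam.PreintegrationParams.MakeSharedU(9.81)  # gravity along -z (placeholder)
params.setAccelerometerCovariance(np.eye(3) * 0.01)
params.setGyroscopeCovariance(np.eye(3) * 0.001)
params.setIntegrationCovariance(np.eye(3) * 1e-7)
pim = gtsam.PreintegratedImuMeasurements(params, gtsam.imuBias.ConstantBias())
for _ in range(100):  # 1 s of synthetic 100 Hz IMU data (0.2 m/s^2 forward, gravity on z)
    pim.integrateMeasurement(np.array([0.2, 0.0, 9.81]), np.zeros(3), 0.01)
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))

# Initial guesses for all unknowns, followed by joint non-linear optimisation.
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), vo_delta)
initial.insert(V(0), np.zeros(3))
initial.insert(V(1), np.zeros(3))
initial.insert(B(0), gtsam.imuBias.ConstantBias())
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)))
```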
An additional use case for tightly coupled sensor fusion is an increase in the accuracy of the estimated trajectory. Assuming Gaussian noise models for both sensors, fusing the two measurements can yield a higher accuracy than either sensor could obtain by itself. This hypothesis was tested in a further experiment, during which the visual odometry drifted. Because the main mechanism behind bias estimation is reducing the residual error between the visual and inertial measurements, the bias estimate quickly absorbs this visual-odometry drift, which in turn deteriorates the accuracy of the visual-inertial odometry module. This observation shows that the estimated bias is not tied to the underlying physical process, but is rather a numerical quantity in the optimisation that reduces the residual error. It raises the question whether this strategy of tightly coupled sensor fusion can actually be used to increase the accuracy of a visual odometry algorithm.
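A one-dimensional illustration of the underlying fusion claim (a hypothetical sketch, not one of the thesis experiments): for two independent Gaussian measurements of the same quantity, the inverse-variance weighted average has a variance smaller than that of either measurement alone. The sensor variances below are placeholders.

```python
# Hypothetical 1-D illustration: fusing two independent Gaussian measurements
# by inverse-variance weighting yields a lower variance than either sensor alone.
import numpy as np

def fuse(z1, var1, z2, var2):
    """Return the fused estimate and its variance for two Gaussian measurements."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_var = 1.0 / (w1 + w2)                  # 1/var_fused = 1/var1 + 1/var2
    fused_z = fused_var * (w1 * z1 + w2 * z2)    # inverse-variance weighted average
    return fused_z, fused_var

rng = np.random.default_rng(0)
truth = 1.0
var_visual, var_inertial = 0.04, 0.09            # placeholder sensor variances
z_visual = truth + rng.normal(0.0, np.sqrt(var_visual), 10000)
z_inertial = truth + rng.normal(0.0, np.sqrt(var_inertial), 10000)

z_fused, var_fused = fuse(z_visual, var_visual, z_inertial, var_inertial)
print("fused variance (analytic):", var_fused)        # ~0.0277 < min(0.04, 0.09)
print("fused variance (empirical):", np.var(z_fused))
```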