Improving RGBD Indoor Mapping with IMU data

Abstract

With the release of RGBD cameras (cameras that provide both RGB and depth information), researchers have started evaluating how these devices can be used in the fields of localization, mapping, and ubiquitous computing. Intel Seattle Research proposed an indoor mapping algorithm that makes use of such a camera. The algorithm is vulnerable to violations of the static environment assumption and to image-based localization failures, which can be caused by, for example, featureless environments. The goal of this master thesis is to augment the indoor mapping algorithm with additional Inertial Measurement Unit (IMU) data to enhance its robustness to these vulnerabilities. To this end, the characteristics and limitations of the Microsoft Kinect are investigated and an enhanced mapping algorithm is proposed. IMU orientation estimates are fused with pose estimates based on image data, and the fused estimate provides the initial guess for the Iterative Closest Point (ICP) algorithm that aligns the point cloud data into a final map. When visual localization fails, Intel's algorithm falls back on a constant velocity assumption, whereas the IMU data provide more accurate orientation estimates than that assumption can. Under ideal mapping conditions, the IMU-enhanced algorithm shows mapping quality similar to that of the plain mapping algorithm. Although a series of corner-case tests shows that the IMU-enhanced algorithm was unable to improve the results compared with the plain mapping algorithm, it can potentially improve mapping quality when dealing with non-static environments.
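
The following is a minimal sketch, not the thesis implementation, of the idea described above: seeding point-to-point ICP with an initial transform whose rotation comes from the IMU orientation estimate and whose translation comes from the image-based pose estimate, instead of relying on a constant velocity guess. All function and variable names (fuse_initial_guess, icp_point_to_point, R_imu, t_visual) are hypothetical, and the ICP here is a textbook point-to-point variant rather than the one used in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree


def fuse_initial_guess(R_imu, t_visual):
    """Build a 4x4 initial transform: rotation from the IMU orientation
    estimate, translation from the image-based pose estimate."""
    T = np.eye(4)
    T[:3, :3] = R_imu    # assumed 3x3 rotation matrix from the IMU filter
    T[:3, 3] = t_visual  # assumed 3-vector translation from visual odometry
    return T


def best_fit_transform(src, dst):
    """Least-squares rigid transform mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # correct for a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T


def icp_point_to_point(source, target, T_init, iterations=30, tol=1e-6):
    """Simple point-to-point ICP on Nx3 point clouds; the IMU-derived
    T_init plays the role of the fallback pose when visual localization
    fails."""
    T = T_init.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iterations):
        # Transform the source cloud by the current estimate.
        src_h = np.hstack([source, np.ones((len(source), 1))])
        moved = (T @ src_h.T).T[:, :3]
        # Find nearest-neighbour correspondences in the target cloud.
        dists, idx = tree.query(moved)
        # Re-estimate the full transform from the original source points
        # to their current correspondences.
        T = best_fit_transform(source, target[idx])
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T


# Usage sketch: align two consecutive Kinect point clouds (Nx3 arrays).
# T0 = fuse_initial_guess(R_imu, t_visual)
# T = icp_point_to_point(cloud_prev, cloud_curr, T0)
```

The design point this sketch illustrates is that ICP only converges to the correct alignment when its initial guess is close enough; an IMU-based orientation seed is intended to keep that guess reasonable precisely in the cases (featureless scenes, fast rotation) where the visual estimate or a constant velocity assumption breaks down.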