An image-based method for the pairwise registration of mobile laser scanning point clouds

Master Thesis (2018)
Author(s)

A. Christodoulou (TU Delft - Architecture and the Built Environment)

Contributor(s)

Peter van Oosterom – Mentor

Ravi Y. Ravi – Mentor

Peter Joosten – Mentor

Berry van Someren – Mentor

Faculty
Architecture and the Built Environment
Copyright
© 2018 Antria Christodoulou
Publication Year
2018
Language
English
Graduation Date
30-10-2018
Awarding Institution
Delft University of Technology
Programme
Geomatics
Sponsors
CycloMedia Technology
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In this thesis, an image-based method is proposed for solving the relative translation errors of 3D point clouds collected with Mobile Laser Scanning (MLS) techniques. The process of aligning 3D point clouds is known as registration. Because of environment-dependent limitations of the positioning component of MLS systems, point clouds of the same scene recorded at different times mainly exhibit positioning offsets. In addition, yaw-angle errors of the recording platform can introduce small rotations around the Z axis or fuzziness in the scanned data. This project deals only with the translation errors.

Various methods for the pairwise registration of point clouds can be found in the literature. Commonly, the challenge of aligning 3D features is tackled directly in 3D; only a few techniques for registering point clouds in 2D have been explored. The approach presented in this thesis instead uses the attributes of the 3D points to generate 2D projections and matches them with a simple correlation technique. As a result, the computational cost of the developed method depends mainly on the number of pixels in the 2D projections rather than on the number of points in the point clouds, which makes it more cost-efficient than 3D registration techniques. The method exploits this benefit to provide redundant translation parameters for each point cloud pair: several images are created from each point cloud tile, illustrating the density of the points, the intensity, the gradient of the intensity, the depth, the gradient of the depth, and the normal vectors of the points. Translation parameters are then retrieved by matching the various image types, which is how redundant solutions are obtained. Next, image-based evaluation criteria are used to detect the reliable translation parameters, and only those are used to compute the final solutions. Because redundant solutions are available, a confidence level can be computed for each final estimation, together with a robustness indicator showing how many estimations were included in its computation. The developed approach is therefore capable of providing information about the precision and reliability of each pairwise registration, so that it is known which of the results can be used in a subsequent step, such as a global registration. Furthermore, since the accuracy of the estimations is restricted to the pixel size of the generated images, a 2D Gaussian elliptical fitting is applied to obtain sub-pixel registration results. It is shown that the developed image-based registration method can produce reliable matches when two point clouds have at least some overlap and corresponding objects are distinct in the pairs of 2D projections.
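As an illustration of the kind of correlation matching described above, the sketch below rasterizes a synthetic point set into a density image, does the same for a translated copy, and recovers the translation from the cross-correlation peak. This is a minimal reconstruction under assumed parameters (a 0.5 m cell size, fixed raster bounds, FFT-based circular correlation), not the thesis implementation; the function names and synthetic data are illustrative only.

```python
import numpy as np

def rasterize_density(points, cell, bounds):
    """Bin the XY coordinates of a point cloud into a 2D density image."""
    xmin, xmax, ymin, ymax = bounds
    ny = int(np.ceil((ymax - ymin) / cell))
    nx = int(np.ceil((xmax - xmin) / cell))
    # rows correspond to Y, columns to X
    img, _, _ = np.histogram2d(points[:, 1], points[:, 0],
                               bins=(ny, nx),
                               range=[[ymin, ymax], [xmin, xmax]])
    return img

def estimate_shift(img_a, img_b):
    """Estimate the integer-pixel shift d such that img_b ~ img_a moved by d,
    using the peak of the FFT-based cross-correlation."""
    A = np.fft.fft2(img_a - img_a.mean())
    B = np.fft.fft2(img_b - img_b.mean())
    corr = np.fft.ifft2(np.conj(A) * B).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak, dtype=float)
    # the correlation is circular, so wrap peaks into a signed range
    for i, n in enumerate(corr.shape):
        if shifts[i] > n // 2:
            shifts[i] -= n
    return shifts  # (dy, dx) in pixels

# synthetic check: translate a point set by a known offset and recover it
rng = np.random.default_rng(0)
pts = rng.uniform(0, 50, size=(20000, 2))
moved = pts + np.array([2.0, -1.5])        # known translation in metres
cell, bounds = 0.5, (-5, 55, -5, 55)
img_a = rasterize_density(pts, cell, bounds)
img_b = rasterize_density(moved, cell, bounds)
dy, dx = estimate_shift(img_a, img_b)
print(dx * cell, dy * cell)                # recovered translation, ≈ 2.0 and -1.5
```

In the thesis the same idea is applied not only to density images but also to intensity, depth, gradient, and normal-vector rasters, and agreement between those redundant estimates is what drives the reliability assessment.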
The technique developed for computing sub-pixel accuracy results shows potential, but further improvement is required.
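The sub-pixel refinement can be sketched as follows. The thesis fits a 2D elliptical Gaussian to the correlation surface; the snippet below shows a simpler per-axis variant of the same idea, fitting a 1D Gaussian through the peak and its two neighbours along each axis (the logarithm of a Gaussian is a parabola, so three samples determine the vertex). This is an assumed simplification for illustration, not the exact fitting procedure of the thesis.

```python
import numpy as np

def gaussian_subpixel_peak(corr):
    """Refine the argmax of a correlation surface to sub-pixel precision by
    fitting a Gaussian through the peak and its neighbours along each axis."""
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c_minus, c_zero, c_plus):
        # log of a Gaussian is a parabola; its vertex is the sub-pixel offset
        lm, l0, lp = np.log(c_minus), np.log(c_zero), np.log(c_plus)
        denom = lm - 2.0 * l0 + lp
        return 0.0 if denom == 0 else 0.5 * (lm - lp) / denom

    dy = refine(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = refine(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return py + dy, px + dx

# synthetic check: sample a Gaussian bump centred off the pixel grid
yy, xx = np.mgrid[0:21, 0:21]
true_y, true_x = 10.3, 9.6
bump = np.exp(-((yy - true_y) ** 2 + (xx - true_x) ** 2) / (2 * 2.0 ** 2))
est_y, est_x = gaussian_subpixel_peak(bump)
print(round(est_y, 2), round(est_x, 2))    # recovers ≈ 10.3, 9.6
```

For a sampled Gaussian this three-point fit is exact; on a real correlation surface the peak is only approximately Gaussian, which matches the abstract's note that the sub-pixel technique works but needs further refinement.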

Files

License info not available