Tailored features for semantic segmentation with a DGCNN using free training samples of a colored airborne point cloud

Journal Article (2020)
Author(s)

E. Widyaningrum (TU Delft - Optical and Laser Remote Sensing, Geospatial Information Agency)

M.K. Fajari (Geospatial Information Agency, Hochschule für Technik Stuttgart)

R. Lindenbergh (TU Delft - Optical and Laser Remote Sensing)

M. Hahn (Hochschule für Technik Stuttgart)

Research Group
Optical and Laser Remote Sensing
Copyright
© 2020 E. Widyaningrum, M.K. Fajari, R.C. Lindenbergh, M. Hahn
DOI related publication
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-339-2020
Publication Year
2020
Language
English
Issue number
B2
Volume number
43
Pages (from-to)
339-346
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Automation of 3D LiDAR point cloud processing is expected to increase the production rate of many applications, including automatic map generation. Rapid development of high-end hardware has boosted the expansion of deep learning research for 3D classification and segmentation. However, deep learning requires large amounts of high-quality training samples. Generating training samples that yield accurate classification results, especially for airborne point cloud data, remains problematic. Moreover, it is still unclear which customized features are best suited for segmenting airborne point cloud data. This paper proposes semi-automatic point cloud labelling and examines the potential of combining different tailor-made features for pointwise semantic segmentation of an airborne point cloud. We implement a Dynamic Graph CNN (DGCNN) approach to classify airborne point cloud data into four land cover classes: bare land, trees, buildings and roads. The DGCNN architecture is chosen because it combines two approaches, PointNet and graph CNNs, to exploit the geometric relationships between points. For our experiments, we train DGCNN on an airborne point cloud and a co-aligned orthophoto of the Surabaya city area in Indonesia, using three different tailor-made feature combinations: points with RGB (Red, Green, Blue) color, points with the original LiDAR features Intensity, Return number and Number of returns (IRN), and points with two spectral colors plus intensity (Red, Green, Intensity; RGI). The overall accuracy on the testing area indicates that using RGB information gives the best segmentation result of 81.05%, while IRN and RGI give accuracy values of 76.13% and 79.81%, respectively.
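The three feature combinations described in the abstract can be illustrated with a small sketch. This is not the authors' code; it only shows how per-point attribute columns might be assembled into a DGCNN input array. The dictionary keys (`xyz`, `rgb`, `intensity`, `return_number`, `num_returns`) and the function name are hypothetical names chosen for the example.

```python
import numpy as np

def build_features(points, combo):
    """Assemble a (N, 6) per-point input array: XYZ plus three extra channels.

    points: dict of NumPy arrays with hypothetical keys:
        'xyz' (N, 3), 'rgb' (N, 3), 'intensity' (N,),
        'return_number' (N,), 'num_returns' (N,).
    combo: one of 'RGB', 'IRN', 'RGI' (the paper's three combinations).
    """
    xyz = points['xyz']
    if combo == 'RGB':
        # Spectral color from the co-aligned orthophoto.
        extra = points['rgb']
    elif combo == 'IRN':
        # Original LiDAR attributes: Intensity, Return number, Number of returns.
        extra = np.stack([points['intensity'],
                          points['return_number'],
                          points['num_returns']], axis=1)
    elif combo == 'RGI':
        # Two spectral colors plus LiDAR intensity.
        extra = np.stack([points['rgb'][:, 0],
                          points['rgb'][:, 1],
                          points['intensity']], axis=1)
    else:
        raise ValueError(f"unknown feature combination: {combo}")
    return np.hstack([xyz, extra.astype(np.float64)])
```

In practice these arrays would be sampled into fixed-size blocks and normalized before being fed to the network; those steps are omitted here.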