Title
Tailored features for semantic segmentation with a DGCNN using free training samples of a colored airborne point cloud

Author
Widyaningrum, E. (TU Delft Optical and Laser Remote Sensing; Geospatial Information Agency)
Fajari, M.K. (Hochschule für Technik Stuttgart; Geospatial Information Agency)
Lindenbergh, R.C. (TU Delft Optical and Laser Remote Sensing)
Hahn, M. (Hochschule für Technik Stuttgart)

Date
2020

Abstract
Automation of 3D LiDAR point cloud processing is expected to increase the production rate of many applications, including automatic map generation. Rapid development of high-end hardware has boosted the expansion of deep learning research for 3D classification and segmentation. However, deep learning requires large amounts of high-quality training samples. Generating training samples that yield accurate classification results, especially for airborne point cloud data, remains problematic. Moreover, it is still unclear which customized features are best suited for segmenting airborne point cloud data. This paper proposes a semi-automatic point cloud labelling method and examines the potential of combining different tailor-made features for pointwise semantic segmentation of an airborne point cloud. We implement a Dynamic Graph CNN (DGCNN) approach to classify airborne point cloud data into four land cover classes: bare land, trees, buildings, and roads. The DGCNN architecture is chosen because this network combines two approaches, PointNet and graph CNNs, to exploit the geometric relationships between points.
For the experiments, we train a DGCNN on an airborne point cloud and a co-aligned orthophoto of the city of Surabaya, Indonesia, using three different tailor-made feature combinations: points with RGB (Red, Green, Blue) color; points with original LiDAR features (Intensity, Return number, Number of returns), so-called IRN; and points with two spectral colors and intensity (Red, Green, Intensity), so-called RGI. The overall accuracy on the testing area indicates that using RGB information gives the best segmentation result, 81.05%, while IRN and RGI give accuracies of 76.13% and 79.81%, respectively.

Subject
aerial photos; airborne point cloud; DGCNN; feature combinations; semantic segmentation

To reference this document use: http://resolver.tudelft.nl/uuid:33cce4e8-3809-4507-a59f-b6b7051b95f2

DOI
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-339-2020

ISSN
1682-1750

Source
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 43 (B2), 339-346

Event
24th ISPRS Congress 2020 (Virtual), 2020-08-31 → 2020-09-02, Nice, France

Part of collection
Institutional Repository

Document type
journal article

Rights
© 2020 E. Widyaningrum, M.K. Fajari, R.C. Lindenbergh, M. Hahn

Files
PDF isprs_archives_XLIII_B2_2 ... 9_2020.pdf 2.46 MB
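The three tailored per-point feature combinations compared in the abstract (RGB, IRN, RGI) can be sketched as follows. This is an illustrative assumption of how such input vectors might be assembled, not the authors' code: the point attribute names and the `make_feature_vector` helper are hypothetical.

```python
# Sketch: assembling the three tailored feature combinations from the abstract.
# Attribute names below are assumptions; real airborne LiDAR data (e.g. LAS files)
# stores comparable per-point fields.
point = {
    "x": 1.0, "y": 2.0, "z": 10.5,       # point coordinates
    "r": 120, "g": 200, "b": 90,         # colors from the co-aligned orthophoto
    "intensity": 143,                    # LiDAR return intensity
    "return_number": 1,
    "number_of_returns": 2,
}

# The three combinations evaluated in the paper.
FEATURE_SETS = {
    "RGB": ("r", "g", "b"),                                      # spectral colors only
    "IRN": ("intensity", "return_number", "number_of_returns"),  # original LiDAR features
    "RGI": ("r", "g", "intensity"),                              # two colors + intensity
}

def make_feature_vector(pt, combination):
    """Return [x, y, z] followed by the chosen extra attributes for one point."""
    extras = FEATURE_SETS[combination]
    return [pt["x"], pt["y"], pt["z"]] + [float(pt[k]) for k in extras]

print(make_feature_vector(point, "RGB"))  # [1.0, 2.0, 10.5, 120.0, 200.0, 90.0]
```

In each case every point keeps its xyz coordinates and gains three extra channels, so the DGCNN input width stays the same across the three experiments and only the semantics of the channels change.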