Fully convolutional networks for street furniture identification in panorama images

Journal Article (2019)
Author(s)

Y. Ao (University of Twente)

Jinbu Wang (TU Delft - Optical and Laser Remote Sensing)

M. Zhou (Chinese Academy of Sciences)

RC Lindenbergh (TU Delft - Optical and Laser Remote Sensing)

M. Y. Yang (University of Twente)

Research Group
Optical and Laser Remote Sensing
Copyright
© 2019 Y. Ao, J. Wang, M. Zhou, R.C. Lindenbergh, M. Y. Yang
DOI related publication
https://doi.org/10.5194/isprs-archives-XLII-2-W13-13-2019
Publication Year
2019
Language
English
Issue number
2/W13
Volume number
XLII
Pages (from-to)
13-20
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Panoramic images are widely used in many applications, especially virtual reality and street-view capture. However, they are new to street furniture identification, which is usually based on mobile laser scanning point clouds or conventional 2D images. This study performs semantic segmentation on panoramic images and on transformed images to separate light poles and traffic signs from the background, using pre-trained Fully Convolutional Networks (FCN). FCN is a key deep learning model for semantic segmentation because of its end-to-end training and pixel-wise prediction. In this study, we use an FCN-8s model pre-trained on the Cityscapes dataset and fine-tune it on our own data. The results show that, for both the pre-trained and the fine-tuned model, transformed images yield better predictions than panoramic images.
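The abstract does not specify how the transformed images were derived from the panoramas. As a point of reference only, a common way to obtain conventional-looking views from an equirectangular panorama is a pinhole (rectilinear) reprojection; the sketch below illustrates that idea in plain NumPy. The function and parameter names are hypothetical and not taken from the paper.

```python
import numpy as np

def equirect_to_perspective(pano, fov_deg=90.0, yaw_deg=0.0, pitch_deg=0.0, out_size=256):
    """Sample a rectilinear (pinhole) view from an equirectangular panorama.

    pano: (H, W, C) array covering 360 deg horizontally, 180 deg vertically.
    fov_deg: horizontal field of view of the output view.
    Nearest-neighbour sampling keeps the sketch dependency-free.
    """
    H, W = pano.shape[:2]
    # Focal length in pixels for the requested field of view
    f = (out_size / 2) / np.tan(np.radians(fov_deg) / 2)
    # Output pixel grid centred on the principal point
    u, v = np.meshgrid(np.arange(out_size) - out_size / 2 + 0.5,
                       np.arange(out_size) - out_size / 2 + 0.5)
    # Ray directions in camera coordinates (x right, y down, z forward)
    x, y, z = u, v, np.full_like(u, f)
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    # Rotate rays by yaw (about the vertical axis) ...
    x2 = x * np.cos(yaw) + z * np.sin(yaw)
    z2 = -x * np.sin(yaw) + z * np.cos(yaw)
    # ... then by pitch (about the horizontal axis)
    y2 = y * np.cos(pitch) - z2 * np.sin(pitch)
    z3 = y * np.sin(pitch) + z2 * np.cos(pitch)
    # Ray direction -> spherical angles -> equirectangular pixel coordinates
    lon = np.arctan2(x2, z3)                                   # [-pi, pi]
    lat = np.arcsin(y2 / np.sqrt(x2**2 + y2**2 + z3**2))       # [-pi/2, pi/2]
    px = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    py = ((lat / np.pi + 0.5) * H).astype(int).clip(0, H - 1)
    return pano[py, px]
```

Views rendered this way have the locally undistorted appearance of conventional 2D images, which is one plausible reason transformed images would suit a network pre-trained on ordinary street imagery such as Cityscapes.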