How Do Neural Networks See Depth in Single Images?

Conference Paper (2019)
Author(s)

Tom van Dijk (TU Delft - Control & Simulation)

G.C.H.E. de Croon (TU Delft - Control & Simulation)

Research Group
Control & Simulation
Copyright
© 2019 Tom van Dijk, G.C.H.E. de Croon
DOI
https://doi.org/10.1109/ICCV.2019.00227
Publication Year
2019
Language
English
Pages (from-to)
2183-2191
ISBN (electronic)
9781728148038
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Deep neural networks have led to a breakthrough in depth estimation from single images. Recent work shows that the quality of these estimations is rapidly increasing. It is clear that neural networks can see depth in single images. However, to the best of our knowledge, no work currently exists that analyzes what these networks have learned. In this work we take four previously published networks and investigate what depth cues they exploit. We find that all networks ignore the apparent size of known obstacles in favor of their vertical position in the image. The use of the vertical position requires the camera pose to be known; however, we find that these networks only partially recognize changes in camera pitch and roll angles. Small changes in camera pitch are shown to disturb the estimated distance towards obstacles. The use of the vertical image position allows the networks to estimate depth towards arbitrary obstacles - even those not appearing in the training set - but may depend on features that are not universally present.
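The vertical-position cue described in the abstract can be made concrete with the standard pinhole-camera relation for points on a flat ground plane: a ground-contact point that appears lower in the image is closer to the camera. The following is a minimal illustrative sketch of that geometry, not the networks' actual mechanism; the focal length, principal-point row, and camera height used here are assumed values, not taken from the paper.

```python
def depth_from_vertical_position(y, f=720.0, cy=240.0, cam_height=1.65):
    """Distance to a ground-contact point at image row y (pixels).

    For a level pinhole camera at height h above a flat ground plane,
    a ground point at distance Z projects to row y with
        y - cy = f * h / Z   =>   Z = f * h / (y - cy).
    f: focal length in pixels; cy: horizon row for a level camera;
    cam_height: camera height above the ground in metres.
    All parameter values here are illustrative assumptions.
    """
    if y <= cy:
        raise ValueError("row must lie below the horizon for ground points")
    return f * cam_height / (y - cy)

# Rows lower in the image (larger y) yield smaller depths.
near = depth_from_vertical_position(460.0)
far = depth_from_vertical_position(300.0)
assert near < far
```

Note that the mapping depends on `cy`, the horizon row: a small change in camera pitch shifts the horizon and therefore biases every depth estimate, which is consistent with the paper's finding that small pitch changes disturb the estimated distance towards obstacles.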
