Deep Vanishing Point Detection

Geometric priors make dataset variations vanish

Conference Paper (2022)
Author(s)

Y. Lin (TU Delft - Pattern Recognition and Bioinformatics)

R.T. Wiersma (TU Delft - Computer Graphics and Visualisation)

Silvia L. Pintea (TU Delft - Pattern Recognition and Bioinformatics)

Klaus Hildebrandt (TU Delft - Computer Graphics and Visualisation)

E. Eisemann (TU Delft - Computer Graphics and Visualisation)

J.C. van Gemert (TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Computer Graphics and Visualisation
Copyright
© 2022 Y. Lin, R.T. Wiersma, S. Pintea, K.A. Hildebrandt, E. Eisemann, J.C. van Gemert
DOI
https://doi.org/10.1109/CVPR52688.2022.00601
Publication Year
2022
Language
English
Pages (from-to)
6093-6103
ISBN (print)
978-1-6654-6947-0
ISBN (electronic)
978-1-6654-6946-3
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Deep learning has improved vanishing point detection in images. Yet, deep networks require expensive annotated datasets trained on costly hardware and do not generalize to even slightly different domains or minor problem variants. Here, we address these issues by injecting deep vanishing point detection networks with prior knowledge. This prior knowledge no longer needs to be learned from data, saving valuable annotation efforts and compute, unlocking realistic few-sample scenarios, and reducing the impact of domain changes. Moreover, the interpretability of the priors makes it possible to adapt deep networks to minor problem variations such as switching between Manhattan and non-Manhattan worlds. We seamlessly incorporate two geometric priors: (i) a Hough Transform, mapping image pixels to straight lines, and (ii) a Gaussian sphere, mapping lines to great circles whose intersections denote vanishing points. Experimentally, we ablate our choices and show comparable accuracy to existing models in the large-data setting. We validate our model's improved data efficiency, robustness to domain changes, and adaptability to non-Manhattan settings.
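To illustrate the second geometric prior described in the abstract (not the authors' implementation), the sketch below maps image line segments to great-circle normals on the Gaussian (unit) sphere and recovers a vanishing direction as the direction most orthogonal to all normals, i.e. the least-squares intersection of the great circles. The function names, the pinhole calibration matrix K, and the toy line segments are hypothetical and only serve to make the mapping concrete.

import numpy as np

def line_to_great_circle_normal(p1, p2, K):
    """Map an image line segment to the normal of its great circle on the
    Gaussian sphere: back-project both endpoints to viewing rays and take
    their cross product (the normal of the plane through the camera center)."""
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ np.array([p1[0], p1[1], 1.0])
    r2 = Kinv @ np.array([p2[0], p2[1], 1.0])
    n = np.cross(r1, r2)
    return n / np.linalg.norm(n)

def estimate_vanishing_direction(normals):
    """Lines meeting at one vanishing point give great-circle normals n_i with
    n_i . v ~= 0. The least-squares v is the right-singular vector of the
    stacked normals with the smallest singular value."""
    N = np.stack(normals)
    _, _, Vt = np.linalg.svd(N)
    v = Vt[-1]
    return v / np.linalg.norm(v)

# Toy example (hypothetical calibration: 640x480 image, 500 px focal length).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
segments = [((100, 50), (110, 400)),   # roughly vertical segment
            ((500, 60), (490, 410))]   # another roughly vertical segment
normals = [line_to_great_circle_normal(p, q, K) for p, q in segments]
v = estimate_vanishing_direction(normals)
vp = K @ v                              # project the direction back to the image plane
print("vanishing point (pixels):", vp[:2] / vp[2])

In this toy setup, the two near-vertical segments converge far below the image, so the printed vanishing point lies well outside the image bounds; representing it as a direction on the Gaussian sphere avoids such unbounded image coordinates, which is one motivation for this prior.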

Files

Deep_vanishing_point_detection... (pdf, 7.04 MB)
Embargo expired on 01-07-2023