Geo-Distinctive Visual Element Matching for Location Estimation of Images

Journal Article (2018)
Author(s)

X. Li (TU Delft - Multimedia Computing)

M. Larson (Radboud Universiteit Nijmegen, TU Delft - Multimedia Computing)

Alan Hanjalic (TU Delft - Intelligent Systems)

Multimedia Computing
Copyright
© 2018 X. Li, M.A. Larson, A. Hanjalic
DOI related publication
https://doi.org/10.1109/TMM.2017.2763323
Publication Year
2018
Language
English
Bibliographical Note
Accepted author manuscript
Issue number
5
Volume number
20
Pages (from-to)
1179-1194
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We propose an image representation and matching approach that substantially improves visual-based location estimation for images. The main novelty of the approach, called distinctive visual element matching (DVEM), is its use of representations that are specific to the query image whose location is being predicted. These representations are based on visual element clouds, which robustly capture the connection between the query and visual evidence from candidate locations. We then maximize the influence of visual elements that are geo-distinctive because they do not occur in images taken at many other locations. We carry out experiments and analysis for both geo-constrained and geo-unconstrained location estimation cases using two large-scale, publicly available datasets: the San Francisco Landmark dataset with 1.06 million street-view images and the MediaEval'15 Placing Task dataset with 5.6 million geo-tagged images from Flickr. We present examples that illustrate the highly transparent mechanics of the approach, which are based on commonsense observations about the visual patterns in image collections. Our results show that the proposed method delivers a considerable performance improvement compared to the state-of-the-art.
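The abstract describes DVEM at a conceptual level: query-specific visual element clouds are matched against candidate locations, and elements that recur across many locations are suppressed so that geo-distinctive evidence dominates. The Python sketch below illustrates one way such geo-distinctive weighting could look. It is a minimal illustration under stated assumptions, not the authors' implementation: the data layout, function names, and the IDF-style weighting formula are assumptions made for clarity.

import math
from collections import defaultdict

def geo_distinctive_location_scores(query_matches):
    """Illustrative sketch (not the paper's method) of ranking candidate
    locations by geo-distinctively weighted visual element matches.

    query_matches: dict mapping location_id -> list of (element_id, match_strength)
        pairs, i.e. the query's visual elements matched against images from
        that candidate location (a simplified "visual element cloud").
    Returns a dict mapping location_id -> aggregated score.
    """
    # Record at how many candidate locations each query element is matched.
    element_locations = defaultdict(set)
    for loc, matches in query_matches.items():
        for element_id, _ in matches:
            element_locations[element_id].add(loc)

    n_locations = max(len(query_matches), 1)

    # Assumed IDF-style weight: elements matched at many locations are
    # down-weighted, so location-specific evidence dominates the score.
    def geo_weight(element_id):
        return math.log(1.0 + n_locations / len(element_locations[element_id]))

    return {
        loc: sum(strength * geo_weight(e) for e, strength in matches)
        for loc, matches in query_matches.items()
    }

# Toy usage: element "a" is matched at both locations (generic),
# element "b" only at loc1, so loc1 should outrank loc2.
example = {
    "loc1": [("a", 0.9), ("b", 0.8)],
    "loc2": [("a", 0.9)],
}
print(geo_distinctive_location_scores(example))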

Files

08068212.pdf
(PDF | 2.01 MB)