Matching images and text with multi-modal tensor fusion and re-ranking

Conference Paper (2019)
Author(s)

Tan Wang (University of Electronic Science and Technology of China)

A. Hanjalic (TU Delft - Intelligent Systems)

Xing Xu (University of Electronic Science and Technology of China)

Heng Tao Shen (University of Electronic Science and Technology of China)

Yang Yang (University of Electronic Science and Technology of China)

Jingkuan Song (University of Electronic Science and Technology of China)

Department
Intelligent Systems
Copyright
© 2019 Tan Wang, A. Hanjalic, Xing Xu, Heng Tao Shen, Yang Yang, Jingkuan Song
DOI related publication
https://doi.org/10.1145/3343031.3350875
Publication Year
2019
Language
English
Pages (from-to)
12-20
ISBN (electronic)
978-1-4503-6889-6
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

A major challenge in matching images and text is that they have intrinsically different data distributions and feature representations. Most existing approaches are based either on embedding or on classification: the former maps image and text instances into a common embedding space for distance measurement, while the latter treats image-text matching as a binary classification problem. Neither approach, however, balances matching accuracy and model complexity well. We propose a novel framework that achieves remarkable matching performance with acceptable model complexity. Specifically, in the training stage, we propose a novel Multi-modal Tensor Fusion Network (MTFN) that explicitly learns an accurate image-text similarity function with rank-based tensor fusion, rather than seeking a common embedding space for each image-text instance. During testing, we deploy a generic Cross-modal Re-ranking (RR) scheme to refine the retrieval results without requiring an additional training procedure. Extensive experiments on two datasets demonstrate that our MTFN-RR consistently achieves state-of-the-art matching performance with much lower time complexity.

Files

Mm2019_final_002_.pdf
(pdf | 5.11 Mb)