Cross-modal hybrid feature fusion for image-sentence matching

Journal Article (2021)
Author(s)

Xing Xu (University of Electronic Science and Technology of China)

Yifan Wang (University of Electronic Science and Technology of China)

Yixuan He (University of Electronic Science and Technology of China)

Yang Yang (University of Electronic Science and Technology of China)

A. Hanjalic (TU Delft - Intelligent Systems)

Heng Tao Shen (University of Electronic Science and Technology of China)

Department
Intelligent Systems
DOI (related publication)
https://doi.org/10.1145/3458281
Publication Year
2021
Language
English
Issue number
4
Volume number
17

Abstract

Image-sentence matching is a challenging task at the intersection of language and vision, which aims to measure the similarity between images and sentence descriptions. Most existing methods independently map the global features of images and sentences into a common space to calculate the image-sentence similarity. However, the similarity obtained by these methods may be coarse because (1) an intermediate common space is introduced to implicitly match the heterogeneous features of images and sentences at the global level, and (2) only the inter-modality relations between images and sentences are captured, while the intra-modality relations are ignored. To overcome these limitations, we propose a novel Cross-Modal Hybrid Feature Fusion (CMHF) framework that directly learns the image-sentence similarity by fusing multimodal features with both inter- and intra-modality relations incorporated. It robustly captures the high-level interactions between visual regions in images and words in sentences, where flexible attention mechanisms generate effective attention flows within and across the two modalities. A structured objective with a ranking loss constraint is formulated in CMHF to learn the image-sentence similarity from the fused fine-grained features of the two modalities, bypassing the use of an intermediate common space. Extensive experiments and comprehensive analysis on two widely used datasets, Microsoft COCO and Flickr30K, show the effectiveness of the hybrid feature fusion framework, with the proposed CMHF method achieving state-of-the-art matching performance.
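The abstract outlines the main ingredients of CMHF: intra-modality attention among visual regions and among words, inter-modality (cross) attention between the two feature sets, direct prediction of the image-sentence score from the fused features (without a shared embedding space), and a triplet ranking loss. The sketch below illustrates these ideas in PyTorch; it is not the authors' implementation, and the module names, feature dimensions, the mean-pooling step, and the MLP score head are assumptions made only for illustration.

```python
# Minimal, illustrative sketch (not the authors' code) of the ideas described
# in the abstract: intra-/inter-modality attention over region and word
# features, fusion into a directly predicted image-sentence score, and a
# triplet ranking loss over those scores. Dimensions and modules are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def attend(query, key, value):
    """Scaled dot-product attention: each query aggregates the values."""
    scores = query @ key.transpose(-2, -1) / key.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ value


class HybridFusionScorer(nn.Module):
    """Fuses region and word features with intra- and inter-modality
    attention, then predicts a similarity score directly, without first
    mapping both modalities into a common embedding space."""

    def __init__(self, dim=256):
        super().__init__()
        self.score_head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                        nn.Linear(dim, 1))

    def forward(self, regions, words):
        # Intra-modality relations: regions attend to regions, words to words.
        regions = regions + attend(regions, regions, regions)
        words = words + attend(words, words, words)
        # Inter-modality relations: cross-attention in both directions.
        img_ctx = attend(regions, words, words).mean(dim=1)    # (B, dim)
        txt_ctx = attend(words, regions, regions).mean(dim=1)  # (B, dim)
        # Fuse the fine-grained contexts and predict the similarity score.
        return self.score_head(torch.cat([img_ctx, txt_ctx], dim=-1)).squeeze(-1)


def ranking_loss(scorer, regions, words, margin=0.2):
    """Hinge-based triplet ranking loss: matched image-sentence pairs should
    score higher than mismatched pairs (rolled within the batch) by a margin."""
    pos = scorer(regions, words)
    neg_sent = scorer(regions, words.roll(1, dims=0))
    neg_img = scorer(regions.roll(1, dims=0), words)
    return (F.relu(margin + neg_sent - pos) + F.relu(margin + neg_img - pos)).mean()


if __name__ == "__main__":
    scorer = HybridFusionScorer(dim=256)
    regions = torch.randn(8, 36, 256)  # e.g. 36 detected regions per image
    words = torch.randn(8, 20, 256)    # e.g. 20 word embeddings per caption
    print(ranking_loss(scorer, regions, words).item())
```

In this sketch the negatives are simply other items in the same batch; the actual CMHF objective and attention design follow the paper rather than this simplified example.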

Metadata only record. There are no files for this record.