Learning fine-grained semantics in spoken language using visual grounding

Conference Paper (2021)
Author(s)

X. Wang (Xi’an Jiaotong University)

Tian Tian (Student TU Delft)

Jihua Zhu (Xi’an Jiaotong University)

O.E. Scharenborg (TU Delft - Multimedia Computing)

Multimedia Computing
Copyright
© 2021 X. Wang, Tian Tian, Jihua Zhu, O.E. Scharenborg
DOI (related publication)
https://doi.org/10.1109/ISCAS51556.2021.9401232
Publication Year
2021
Language
English
Bibliographical Note
Accepted author manuscript
ISBN (electronic)
978-1-7281-9201-7
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In the case of unwritten languages, acoustic models cannot be trained in the standard way, i.e., using speech and textual transcriptions. Recently, several methods have been proposed to learn speech representations from images, i.e., using visual grounding. Existing studies have focused on scene images. Here, we investigate whether fine-grained semantic information, reflecting the relationship between attributes and objects, can be learned from spoken language. To this end, we propose a Fine-grained Semantic Embedding Network (FSEN) for learning semantic representations of spoken language grounded in fine-grained images. For training, we propose an efficient objective function that combines a matching constraint, an adversarial objective, and a classification constraint. The learned speech representations are evaluated on two tasks: speech-image cross-modal retrieval and speech-to-image generation. On the retrieval task, FSEN outperforms other state-of-the-art methods on a scene image dataset and on two fine-grained datasets. The image generation task shows that the learned speech representations can be used to generate high-quality, semantically consistent fine-grained images. Learning fine-grained semantics from spoken language via visual grounding is thus possible.
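As an illustration of the three-term training objective described in the abstract, below is a minimal PyTorch sketch combining a matching constraint, an adversarial objective, and a classification constraint. All module names, the hinge-based matching formulation, the discriminator architecture, and the loss weights are assumptions made for illustration; they are not taken from the paper's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Discriminator(nn.Module):
        """Tries to tell speech embeddings apart from image embeddings (assumed architecture)."""
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

        def forward(self, z):
            return self.net(z).squeeze(-1)  # raw logits

    def matching_loss(speech_emb, image_emb, margin=0.2):
        """Bidirectional hinge loss on cosine similarity with in-batch negatives."""
        s = F.normalize(speech_emb, dim=-1)
        v = F.normalize(image_emb, dim=-1)
        sim = s @ v.t()                    # (B, B) similarity matrix
        pos = sim.diag().unsqueeze(1)      # matched pairs sit on the diagonal
        cost_s = (margin + sim - pos).clamp(min=0)      # speech -> image direction
        cost_v = (margin + sim - pos.t()).clamp(min=0)  # image -> speech direction
        mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
        return (cost_s.masked_fill(mask, 0) + cost_v.masked_fill(mask, 0)).mean()

    def total_loss(speech_emb, image_emb, class_logits, labels,
                   disc, lambda_adv=0.1, lambda_cls=1.0):
        """Matching + adversarial + classification terms; weights are assumptions."""
        l_match = matching_loss(speech_emb, image_emb)
        # Adversarial term: the embedder is rewarded when the discriminator
        # cannot distinguish the speech modality from the image modality.
        # (The discriminator's own update with true modality labels is omitted.)
        logits = disc(torch.cat([speech_emb, image_emb], dim=0))
        l_adv = F.binary_cross_entropy_with_logits(logits, torch.full_like(logits, 0.5))
        # Classification constraint: embeddings should predict the class label.
        l_cls = F.cross_entropy(class_logits, labels)
        return l_match + lambda_adv * l_adv + lambda_cls * l_cls

Illustrative usage with random tensors (batch size, embedding dimension, and number of classes are arbitrary):

    B, D, C = 8, 512, 200
    disc = Discriminator(D)
    loss = total_loss(torch.randn(B, D), torch.randn(B, D),
                      torch.randn(B, C), torch.randint(0, C, (B,)), disc)
    loss.backward()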

Files

ISCAS2021_FSEN_Xinsheng.pdf
(pdf | 0.783 MB)
License info not available