Learning fine-grained semantics in spoken language using visual grounding

Abstract

In the case of unwritten languages, acoustic models cannot be trained in the standard way, i.e., on speech paired with textual transcriptions. Recently, several methods have been proposed to learn speech representations using images, i.e., using visual grounding. Existing studies have focused on scene images. Here, we investigate whether fine-grained semantic information, reflecting the relationship between attributes and objects, can be learned from spoken language. To this end, we propose a Fine-grained Semantic Embedding Network (FSEN) that learns semantic representations of spoken language grounded in fine-grained images. For training, we propose an efficient objective function that combines a matching constraint, an adversarial objective, and a classification constraint. The learned speech representations are evaluated on two tasks: speech-image cross-modal retrieval and speech-to-image generation. On the retrieval task, FSEN outperforms other state-of-the-art methods on a scene image dataset and on two fine-grained datasets. The image generation task shows that the learned speech representations can be used to generate high-quality, semantically consistent fine-grained images. Learning fine-grained semantics from spoken language via visual grounding is thus possible.
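
To make the three-part objective concrete, the following is a minimal PyTorch sketch of how such a combined loss could be assembled. The function name `fsen_loss`, the weights `lambda_adv` and `lambda_cls`, and the specific loss forms (a symmetric contrastive matching loss, a GAN-style modality discriminator fooled toward an uncertain output, and a cross-entropy classification head) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def fsen_loss(speech_emb, image_emb, labels, disc_logits, cls_logits,
              lambda_adv=1.0, lambda_cls=1.0):
    """Combined objective: matching + adversarial + classification.

    All weights and loss forms are assumptions for illustration;
    the paper's actual objective may differ.
    """
    # Matching constraint (assumed form): symmetric contrastive loss
    # that pulls paired speech/image embeddings together and pushes
    # mismatched pairs apart within the batch.
    sims = speech_emb @ image_emb.t()               # pairwise similarities
    targets = torch.arange(sims.size(0), device=sims.device)
    match = (F.cross_entropy(sims, targets) +
             F.cross_entropy(sims.t(), targets)) / 2

    # Adversarial objective (assumed form): a modality discriminator
    # outputs logits for "speech vs. image"; the encoders are trained
    # to drive it toward maximal uncertainty (probability 0.5).
    adv = F.binary_cross_entropy_with_logits(
        disc_logits, torch.full_like(disc_logits, 0.5))

    # Classification constraint (assumed form): the embeddings must
    # also predict the fine-grained class label of the input.
    cls = F.cross_entropy(cls_logits, labels)

    return match + lambda_adv * adv + lambda_cls * cls
```

In such a setup, the matching term aligns the two modalities, the adversarial term encourages a modality-invariant embedding space, and the classification term injects the fine-grained attribute-object label structure that the retrieval and generation experiments evaluate.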