Learning to recognise words using visually grounded speech

Conference Paper (2021)
Author(s)

Sebastiaan Scholten (Student TU Delft)

Danny Merkx (Radboud Universiteit Nijmegen)

Odette Scharenborg (TU Delft - Multimedia Computing)

Multimedia Computing
Copyright
© 2021 Sebastiaan Scholten, Danny Merkx, O.E. Scharenborg
DOI
https://doi.org/10.1109/ISCAS51556.2021.9401692
Publication Year
2021
Language
English
Bibliographical Note
Accepted author manuscript
ISBN (electronic)
978-1-7281-9201-7
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We investigated word recognition in a Visually Grounded Speech model. The model was trained on pairs of images and spoken captions to create visually grounded embeddings that can be used for speech-to-image retrieval and vice versa. We investigated whether such a model can recognise words by embedding isolated words and using them to retrieve images of their visual referents. We examined the time course of word recognition using a gating paradigm and performed a statistical analysis to determine whether well-known word competition effects in human speech processing influence word recognition in the model. Our experiments show that the model is able to recognise words, and the gating paradigm reveals that words can be recognised from partial input and that recognition is negatively influenced by competition from the word-initial cohort.
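The retrieval step described above — embedding an isolated word and ranking candidate images by similarity in the shared embedding space — can be sketched as follows. This is a minimal illustration, not the authors' code: the embeddings, dimensionality, and names are hypothetical, and cosine similarity is assumed as the ranking metric, a common choice in speech-image retrieval.

```python
import numpy as np

def rank_images(word_embedding, image_embeddings):
    """Rank candidate images by cosine similarity to a word embedding.

    word_embedding: (d,) vector for an isolated spoken word.
    image_embeddings: (n, d) matrix, one row per candidate image.
    Returns image indices ordered from most to least similar.
    """
    # Normalise so that the dot product equals cosine similarity.
    w = word_embedding / np.linalg.norm(word_embedding)
    imgs = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    similarities = imgs @ w
    # Negate to sort in descending order of similarity.
    return np.argsort(-similarities)

# Toy example with hypothetical 3-d embeddings: the spoken word "dog"
# should retrieve the image whose embedding points in a similar direction.
word_dog = np.array([0.9, 0.1, 0.0])
images = np.array([
    [0.0, 1.0, 0.2],   # e.g. an image of a cat
    [1.0, 0.2, 0.1],   # e.g. an image of a dog
    [0.1, 0.1, 1.0],   # e.g. an image of a car
])
ranking = rank_images(word_dog, images)
# The dog image (index 1) is ranked first.
```

A word counts as recognised when images of its referent dominate the top of this ranking; the gating analysis repeats the same procedure on progressively longer truncations of the word's audio.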
