Learning to recognise words using visually grounded speech

Authors: Sebastiaan Scholten (Student, TU Delft); Danny Merkx (Radboud Universiteit Nijmegen); O.E. Scharenborg (TU Delft, Multimedia Computing)
Date: 2021

Abstract: We investigated word recognition in a visually grounded speech model. The model was trained on pairs of images and spoken captions to create visually grounded embeddings, which can be used for speech-to-image retrieval and vice versa. We investigate whether such a model can recognise words by embedding isolated words and using them to retrieve images of their visual referents. We investigate the time course of word recognition using a gating paradigm, and we perform a statistical analysis to see whether well-known word competition effects in human speech processing influence word recognition. Our experiments show that the model is able to recognise words, and the gating paradigm reveals that words can be recognised from partial input, with recognition negatively influenced by competition from the word-initial cohort.

Subjects: Analysis; Flickr8k; Recurrent neural network; Visually grounded speech
To reference this document use: http://resolver.tudelft.nl/uuid:de5e051d-2e43-43d9-858e-24abfd60f2ff
DOI: https://doi.org/10.1109/ISCAS51556.2021.9401692
Publisher: IEEE, Piscataway
ISBN: 978-1-7281-9201-7
Source: 2021 IEEE International Symposium on Circuits and Systems (ISCAS)
Event: 53rd IEEE International Symposium on Circuits and Systems, ISCAS 2021, 2021-05-22 → 2021-05-28, Virtual at Daegu, Korea, Republic of
Bibliographical note: Accepted author manuscript
Part of collection: Institutional Repository
Document type: conference paper
Rights: © 2021 Sebastiaan Scholten, Danny Merkx, O.E. Scharenborg
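The retrieval step the abstract describes, embedding an isolated word and retrieving images of its visual referent, can be sketched as ranking image embeddings by cosine similarity to the word embedding in a shared embedding space. This is only an illustrative sketch: the function name `rank_images` and the toy vectors are assumptions, not taken from the paper, which trains the embeddings with a recurrent neural network on Flickr8k image-caption pairs.

```python
import numpy as np

def rank_images(word_embedding, image_embeddings):
    """Rank images by cosine similarity to a visually grounded word embedding.

    word_embedding: (d,) vector for an isolated spoken word.
    image_embeddings: (n, d) matrix, one row per image.
    Returns indices of images sorted from most to least similar, plus similarities.
    """
    # L2-normalise so the dot product equals cosine similarity
    w = word_embedding / np.linalg.norm(word_embedding)
    imgs = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    sims = imgs @ w
    return np.argsort(-sims), sims

# Toy example with 2-dimensional embeddings (real models use hundreds of dims):
images = np.array([[1.0, 0.0],   # image 0
                   [0.6, 0.8],   # image 1
                   [0.0, 1.0]])  # image 2
word = np.array([0.55, 0.80])    # hypothetical embedding of an isolated word
order, sims = rank_images(word, images)
print(order)  # image 1 ranks first: its direction is closest to the word's
```

A word counts as recognised when an image of its visual referent appears at (or near) the top of this ranking; the gating paradigm repeats the same retrieval with progressively longer word-initial fragments of the spoken word.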