Searched for: subject: "Perceptual learning"
(1 - 5 of 5)
Drozdova, Polina (author), van Hout, Roeland (author), Mattys, Sven (author), Scharenborg, O.E. (author)
There is ample evidence that both native and non-native listeners deal with speech variation by quickly tuning into a speaker and adjusting their phonetic categories according to the speaker's ambiguous pronunciation. This process is called lexically-guided perceptual learning. Moreover, the presence of noise in the speech signal has...
journal article 2021
Scharenborg, O.E. (author), Koemans, Jiska (author), Smith, Cybelle (author), Hasegawa-Johnson, Mark (author), Federmeier, Kara D. (author)
There is ample evidence showing that listeners are able to quickly adapt their phoneme classes to ambiguous sounds using a process called lexically-guided perceptual learning. This paper presents the first attempt to examine the neural correlates underlying this process. Specifically, we compared the brain’s responses to ambiguous [f/s] sounds...
conference paper 2019
Scharenborg, O.E. (author)
For most languages in the world and for speech that deviates from the standard pronunciation, not enough (annotated) speech data is available to train an automatic speech recognition (ASR) system. Moreover, human intervention is needed to adapt an ASR system to a new language or type of speech. Human listeners, on the other hand, are able to...
conference paper 2019
Ni, Junrui (author), Hasegawa-Johnson, Mark (author), Scharenborg, O.E. (author)
Both human listeners and machines need to adapt their sound categories whenever a new speaker is encountered. This perceptual learning is driven by lexical information. In previous work, we have shown that deep neural network-based (DNN) ASR systems can learn to adapt their phoneme category boundaries from a few labeled examples after exposure...
conference paper 2019
Scharenborg, O.E. (author), Tiesmeyer, Sebastian (author), Hasegawa-Johnson, Mark (author), Dehak, Najim (author)
Both human listeners and machines need to adapt their sound categories whenever a new speaker is encountered. This perceptual learning is driven by lexical information. The aim of this paper is two-fold: investigate whether a deep neural network-based (DNN) ASR system can adapt to only a few examples of ambiguous speech as humans have been found...
conference paper 2018
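The last two entries describe DNN-based ASR systems that, like human listeners, retune a phoneme category boundary from only a few ambiguous examples whose labels are supplied by lexical context. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' code: a toy [f]/[s] classifier in PyTorch is first trained on clear exemplars and then fine-tuned on a handful of "lexically labeled" ambiguous tokens. All names, feature dimensions, and data are illustrative assumptions.

```python
# Hypothetical sketch of lexically-guided perceptual learning in a DNN phoneme classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

FEAT_DIM, N_PHONES = 13, 2  # toy acoustic features; classes: 0 = [f], 1 = [s]

classifier = nn.Sequential(
    nn.Linear(FEAT_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, N_PHONES),
)

# "Pretraining" data: clear [f] and [s] exemplars on either side of the boundary.
clear_f = torch.randn(100, FEAT_DIM) - 1.0
clear_s = torch.randn(100, FEAT_DIM) + 1.0
x_pre = torch.cat([clear_f, clear_s])
y_pre = torch.cat([torch.zeros(100, dtype=torch.long),
                   torch.ones(100, dtype=torch.long)])

opt = torch.optim.Adam(classifier.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):  # establish baseline phoneme categories
    opt.zero_grad()
    loss_fn(classifier(x_pre), y_pre).backward()
    opt.step()

# A few ambiguous [f/s] tokens, acoustically in between the two categories.
ambiguous = torch.randn(5, FEAT_DIM) * 0.3
# Lexically-guided labels: the word context (e.g. "gira[f/s]e") implies these were [f].
lexical_labels = torch.zeros(5, dtype=torch.long)

with torch.no_grad():
    before = classifier(ambiguous).softmax(dim=1)[:, 0].mean().item()

# Few-shot adaptation on the ambiguous, lexically labeled tokens, analogous to
# listeners retuning their category boundary after brief exposure to one speaker.
opt_adapt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(20):
    opt_adapt.zero_grad()
    loss_fn(classifier(ambiguous), lexical_labels).backward()
    opt_adapt.step()

with torch.no_grad():
    after = classifier(ambiguous).softmax(dim=1)[:, 0].mean().item()

print(f"mean P([f]) on ambiguous tokens: before={before:.2f}, after={after:.2f}")
```

After adaptation, the ambiguous tokens should be classified as [f] with higher probability, mirroring the boundary shift reported for both human listeners and DNN ASR systems in the papers above.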