Acoustic recognition of motorized vehicles with a moving listener

Abstract

New measures have to be taken to combat fatalities caused by traffic accidents. Intelligent vehicles have the potential to increase safety, but they depend heavily on their automated perception ability.
Acoustic perception, a largely unused sensing modality in this field, has potential for the detection of nearby vehicles, an ability both human drivers and autonomous vehicles could use assistance with. In this thesis two existing datasets are evaluated: AudioSet, a large general-purpose dataset, and RoadCube, a small dedicated vehicle-recognition set. Furthermore, commonly used acoustic features and classifier algorithms are evaluated. Special attention is given to the influence of a moving listener vehicle on recognition performance. For this evaluation a new dataset, DriveSound, is captured. It contains samples recorded from a listener car, both while it is moving and while it is idle. Results show that RoadCube can be used for the detection of road vehicles, but only when the listener is idle. The best-performing classifier trained on RoadCube, a Gaussian Mixture Model, surpassed classifiers trained on the evaluation dataset itself, reaching a Matthews Correlation Coefficient (MCC) of 0.34. None of the classifiers performed well on the samples captured by a moving listener, except for the classifiers trained on the driving portion of DriveSound; the Support Vector Machine trained on this subset attained an MCC of 0.56.
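The abstract evaluates classifiers such as a Gaussian Mixture Model and scores them with the Matthews Correlation Coefficient. The sketch below is purely illustrative and not the thesis's actual pipeline: it assumes one GMM per class (vehicle present vs. background) compared by log-likelihood, 13-dimensional placeholder feature vectors standing in for real acoustic features (e.g. MFCCs), and arbitrarily chosen component counts.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)

# Placeholder per-frame feature matrices; in a real setup these would be
# acoustic features extracted from the labelled training recordings.
vehicle_train = rng.normal(loc=1.0, size=(500, 13))
background_train = rng.normal(loc=-1.0, size=(500, 13))

# Fit one GMM per class on that class's feature frames.
gmm_vehicle = GaussianMixture(n_components=4, random_state=0).fit(vehicle_train)
gmm_background = GaussianMixture(n_components=4, random_state=0).fit(background_train)

# Held-out evaluation frames with ground-truth labels (1 = vehicle present).
test_features = np.vstack([rng.normal(loc=1.0, size=(100, 13)),
                           rng.normal(loc=-1.0, size=(100, 13))])
y_true = np.array([1] * 100 + [0] * 100)

# Classify each frame by comparing the per-class log-likelihoods.
y_pred = (gmm_vehicle.score_samples(test_features)
          > gmm_background.score_samples(test_features)).astype(int)

# MCC: +1 is perfect agreement, 0 is chance level, -1 is total disagreement.
print("MCC:", matthews_corrcoef(y_true, y_pred))
```

The MCC is used here because it stays informative on imbalanced detection data, where vehicle-present frames may be far rarer than background frames.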