Can You Hear It? Backdoor Attacks via Ultrasonic Triggers

Conference Paper (2022)
Author(s)

S. Koffas (TU Delft - Cyber Security)

Jing Xu (TU Delft - Cyber Security)

Mauro Conti (TU Delft - Cyber Security, Università degli Studi di Padova)

Stjepan Picek (TU Delft - Cyber Security, Radboud Universiteit Nijmegen)

Research Group
Cyber Security
Copyright
© 2022 S. Koffas, J. Xu, M. Conti, S. Picek
DOI related publication
https://doi.org/10.1145/3522783.3529523
Publication Year
2022
Language
English
Pages (from-to)
57-62
ISBN (electronic)
978-1-4503-9277-8
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This work explores backdoor attacks on automatic speech recognition systems in which we inject inaudible triggers. By doing so, we make the backdoor attack challenging for legitimate users to detect and, consequently, potentially more dangerous. We conduct experiments on two versions of a speech dataset and three neural networks, and we explore how the attack's performance depends on the duration, position, and type of the trigger. Our results indicate that poisoning less than 1% of the training data is sufficient to deploy a backdoor attack and reach a 100% attack success rate. We observed that short, non-continuous triggers result in highly successful attacks. Still, since our trigger is inaudible, it can be arbitrarily long without raising suspicion, making the attack more effective. Finally, we conducted our attack on actual hardware and found that an adversary could manipulate inference in an Android application by playing the inaudible trigger over the air.
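To illustrate the kind of poisoning the abstract describes, the sketch below superimposes a high-frequency sine tone (above human hearing but below the Nyquist limit) on a fraction of training clips and relabels them to an attacker-chosen target class. This is a minimal illustration, not the paper's implementation: the function names, the 21 kHz trigger frequency, the amplitude, and the 1% poison rate are assumptions chosen for the example.

```python
import numpy as np

def add_ultrasonic_trigger(waveform, sample_rate=44100, trigger_freq=21000,
                           trigger_len=0.5, position=0, amplitude=0.1):
    """Superimpose an inaudible high-frequency sine tone on an audio clip.

    The trigger frequency must sit above human hearing (~20 kHz) but below
    the Nyquist limit (sample_rate / 2) so it survives sampling.
    All parameter values here are illustrative assumptions.
    """
    assert trigger_freq < sample_rate / 2, "trigger must be below Nyquist"
    poisoned = waveform.astype(np.float32).copy()
    n = int(trigger_len * sample_rate)
    t = np.arange(n) / sample_rate
    tone = amplitude * np.sin(2 * np.pi * trigger_freq * t)
    end = min(position + n, len(poisoned))
    poisoned[position:end] += tone[: end - position]
    return poisoned

def poison_dataset(clips, labels, target_label, poison_rate=0.01, **trigger_kw):
    """Poison a small fraction of clips and relabel them to the target class."""
    rng = np.random.default_rng(0)
    n_poison = max(1, int(poison_rate * len(clips)))
    idx = rng.choice(len(clips), size=n_poison, replace=False)
    clips = [c.copy() for c in clips]
    labels = list(labels)
    for i in idx:
        clips[i] = add_ultrasonic_trigger(clips[i], **trigger_kw)
        labels[i] = target_label
    return clips, labels
```

A model trained on the poisoned set learns to associate the ultrasonic tone with the target label; at inference time, playing the tone over the air alongside any utterance should flip the prediction, while the audio sounds unchanged to a human listener.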