Why So Toxic?

Measuring and Triggering Toxic Behavior in Open-Domain Chatbots

Conference Paper (2022)
Author(s)

Wai Man Si (CISPA Helmholtz Center)

Michael Backes (CISPA Helmholtz Center)

Jeremy Blackburn (Binghamton University State University of New York)

Emiliano De Cristofaro (University College London)

Gianluca Stringhini (Boston University)

S. Zannettou (TU Delft - Organisation & Governance)

Y. Zhang (CISPA Helmholtz Center)

Research Group
Organisation & Governance
Copyright
© 2022 Wai Man Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, S. Zannettou, Y. Zhang
DOI related publication
https://doi.org/10.1145/3548606.3560599
More Info
expand_more
Publication Year
2022
Language
English
Copyright
© 2022 Wai Man Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, S. Zannettou, Y. Zhang
Research Group
Organisation & Governance
Pages (from-to)
2659-2673
ISBN (electronic)
9781450394505
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Chatbots are used in many applications, e.g., automated agents, smart home assistants, interactive characters in online games, etc. Therefore, it is crucial to ensure they do not behave in undesired manners, providing offensive or toxic responses to users. This is not a trivial task as state-of-the-art chatbot models are trained on large, public datasets openly collected from the Internet. This paper presents a first-of-its-kind, large-scale measurement of toxicity in chatbots. We show that publicly available chatbots are prone to providing toxic responses when fed toxic queries. Even more worryingly, some non-toxic queries can trigger toxic responses too. We then set out to design and experiment with an attack, ToxicBuddy, which relies on fine-tuning GPT-2 to generate non-toxic queries that make chatbots respond in a toxic manner. Our extensive experimental evaluation demonstrates that our attack is effective against public chatbot models and outperforms manually-crafted malicious queries proposed by previous work. We also evaluate three defense mechanisms against ToxicBuddy, showing that they either reduce the attack performance at the cost of affecting the chatbot's utility or are only effective at mitigating a portion of the attack. This highlights the need for more research from the computer security and online safety communities to ensure that chatbot models do not hurt their users. Overall, we are confident that ToxicBuddy can be used as an auditing tool and that our work will pave the way toward designing more effective defenses for chatbot safety.

Files

3548606.3560599.pdf
(pdf | 1.44 Mb)
- Embargo expired in 01-07-2023
License info not available