Safeguarding Crowdsourcing Surveys from ChatGPT through Prompt Injection

Journal Article (2025)
Author(s)

Chaofan Wang (TU Delft - Human-Centred Artificial Intelligence, Wenzhou University)

Samuel Kernan Freire (Knowledge and Intelligence Design, De Haagse Hogeschool)

Mo Zhang (University of Melbourne, University of Birmingham)

Jing Wei (University of Melbourne)

Jorge Goncalves (University of Melbourne)

Vassilis Kostakos (University of Melbourne)

Alessandro Bozzon (TU Delft - Sustainable Design Engineering)

Evangelos Niforatos (Knowledge and Intelligence Design)

Research Group
Human-Centred Artificial Intelligence
DOI related publication
https://doi.org/10.1145/3757503
More Info
expand_more
Publication Year
2025
Language
English
Research Group
Human-Centred Artificial Intelligence
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Journal title
Proceedings of the ACM on Human-Computer Interaction
Issue number
7
Volume number
9
Article number
CSCW322
Downloads counter
149
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

ChatGPT and other large language models (LLMs) have proven useful in crowdsourcing tasks, where they can effectively annotate machine learning training data. However, this means that they also have the potential for misuse, specifically to automatically answer surveys. LLMs can potentially circumvent quality assurance measures, thereby threatening the integrity of methodologies that rely on crowdsourcing surveys. In this paper, we propose a mechanism to detect LLM-generated responses to surveys. The mechanism uses ''prompt injection,'' such as directions that can mislead LLMs into giving predictable responses. We evaluate our technique against a range of question scenarios, types, and positions, and find that it can reliably detect LLM-generated responses with more than 98% effectiveness. We also provide an open-source software to help survey designers use our technique to detect LLM responses. Our work is a step in ensuring that survey methodologies remain rigorous vis-a-vis LLMs.

Files

3757503.pdf
(pdf | 1.61 Mb)
License info not available
warning

File under embargo until 16-04-2026