What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing

Conference Paper (2021)
Author(s)

Zahra Nouri (Paderborn University)

Ujwal Gadiraju (TU Delft - Web Information Systems)

Gregor Engels (Paderborn University)

Henning Wachsmuth (Paderborn University)

Research Group
Web Information Systems
Copyright
© 2021 Zahra Nouri, Ujwal Gadiraju, Gregor Engels, Henning Wachsmuth
DOI
https://doi.org/10.1145/3465336.3475109
Publication Year
2021
Language
English
Pages (from-to)
165-175
ISBN (electronic)
978-1-4503-8551-0
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Designing tasks clearly to facilitate accurate task completion is a challenging endeavor for requesters on crowdsourcing platforms. Prior research shows that inexperienced requesters fail to write clear and complete task descriptions, which directly leads to low-quality submissions from workers. Complementing existing work that has aimed to address this challenge, in this paper we study whether clarity flaws in task descriptions can be identified automatically using natural language processing methods. We identify and synthesize seven clarity flaws in task descriptions that are grounded in relevant literature. We build both BERT-based and feature-based binary classifiers to study the extent to which clarity flaws in task descriptions can be assessed computationally, and to understand the textual properties of descriptions that affect task clarity. Through a crowdsourced study, we collect annotations of clarity flaws in 1332 real task descriptions. Using this dataset, we evaluate several configurations of the classifiers. Our results indicate that nearly all the clarity flaws in task descriptions can be assessed reasonably well by the classifiers. We find that the content, style, and readability of task descriptions are particularly important in shaping their clarity. This work has important implications for the design of tools that help requesters improve task clarity on crowdsourcing platforms, and flaw-specific properties can provide valuable guidance for improving task descriptions.
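To give a concrete sense of the kind of classifier the abstract describes, below is a minimal sketch of fine-tuning a BERT-based binary classifier for a single clarity flaw, written with the Hugging Face transformers library. This is not the authors' released implementation; the model name, example texts, and labels are illustrative assumptions.

    # Minimal sketch (not the authors' code): fine-tune BERT as a binary
    # classifier for one clarity flaw. Texts and labels are hypothetical;
    # the paper trains on 1332 crowd-annotated task descriptions.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # 0 = flaw absent, 1 = flaw present

    texts = ["Transcribe the audio.",  # hypothetical task descriptions
             "Rate each tweet's sentiment from 1 to 5; e.g., 'great day!' -> 5."]
    labels = torch.tensor([1, 0])  # first description lacks detail, second does not

    inputs = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    outputs = model(**inputs, labels=labels)  # loss computed internally
    outputs.loss.backward()                   # one step of a standard fine-tuning loop
    optimizer.step()

In practice, one such classifier would be trained per flaw type, so that a task description can be flagged for each of the seven flaws independently.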

Files

3465336.3475109.pdf
(pdf | 1.51 MB)
- Embargo expired on 01-02-2022
License info not available