Modeling Task Complexity in Crowdsourcing

Conference Paper (2016)
Author(s)

J. Yang (TU Delft - Web Information Systems)

J.A. Redi (TU Delft - Multimedia Computing)

Gianluca Demartini (University of Sheffield)

A. Bozzon (TU Delft - Web Information Systems)

Publication Year
2016
Language
English
Research Group
Web Information Systems
Pages (from-to)
249-258

Abstract

Complexity is a crucial property for characterizing tasks performed by humans through computer systems. Yet the theory and practice of crowdsourcing currently lack a clear understanding of task complexity, which hinders the design of effective and efficient execution interfaces and of fair monetary rewards. To understand how complexity is perceived and distributed over crowdsourcing tasks, we conducted an experiment in which workers were asked to evaluate the complexity of 61 re-instantiated real-world crowdsourcing tasks. We show that task complexity, while subjective, is coherently perceived across workers; at the same time, it is significantly influenced by task type. Next, we develop a high-dimensional regression model to assess the influence of three classes of structural features (metadata, content, and visual) on task complexity, and ultimately use them to measure it. Results show that both the appearance of a task and the language used in its description can accurately predict task complexity. Finally, we apply the same feature set to predict task performance, based on five years' worth of tasks from Amazon MTurk. Results show that complexity-related features improve the quality of task performance prediction, demonstrating the utility of complexity as a task modeling property.
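
To make the modeling step concrete, below is a minimal sketch (not the authors' implementation) of how a regularized regression over the three feature groups named in the abstract could be set up. The feature dimensions, the use of ridge regression, and the synthetic data are all illustrative assumptions; the paper's actual features and model may differ.

```python
# Sketch: regressing perceived task complexity on three assumed feature groups
# (metadata, content, visual) with a regularized linear model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the feature classes (dimensions are illustrative).
n_tasks = 61
metadata_feats = rng.normal(size=(n_tasks, 5))   # e.g., reward, allotted time
content_feats = rng.normal(size=(n_tasks, 20))   # e.g., description length, readability
visual_feats = rng.normal(size=(n_tasks, 10))    # e.g., layout/appearance measures

X = np.hstack([metadata_feats, content_feats, visual_feats])
y = rng.normal(size=n_tasks)                     # stand-in for worker complexity ratings

# Standardization plus ridge regularization keeps the high-dimensional,
# correlated feature space tractable with only 61 observations.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```

On real data, the learned coefficients (or a per-group ablation) would indicate how much each feature class contributes to predicting complexity, which is the kind of analysis the abstract describes.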

Metadata only record. There are no files for this record.