Estimating Conversational Styles in Conversational Microtask Crowdsourcing

Journal Article (2020)
Author(s)

Sihang Qiu (TU Delft - Web Information Systems)

Ujwal Gadiraju (TU Delft - Web Information Systems)

A. Bozzon (TU Delft - Human-Centred Artificial Intelligence, TU Delft - Web Information Systems)

Research Group
Web Information Systems
Copyright
© 2020 S. Qiu, Ujwal Gadiraju, A. Bozzon
DOI related publication
https://doi.org/10.1145/3392837
Publication Year
2020
Language
English
Issue number
CSCW1
Volume number
4
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Crowdsourcing marketplaces have provided a large number of opportunities for online workers to earn a living. To improve the satisfaction and engagement of such workers, who are vital for the sustainability of these marketplaces, recent works have used conversational interfaces to support the execution of a variety of crowdsourcing tasks. The rationale behind using conversational interfaces stems from the potential engagement that conversation can stimulate. Prior works in psychology have also shown that ‘conversational styles’ can play an important role in communication. There are unexplored opportunities to estimate a worker’s conversational style with the end goal of improving worker satisfaction, engagement and output quality. Addressing this knowledge gap, we investigate the role of conversational styles in conversational microtask crowdsourcing. To this end, we design a conversational interface which supports task execution, and we propose methods to estimate the conversational style of a worker. Our experimental setup was designed to empirically observe how the conversational styles of workers relate to quality-related outcomes. Results show that even a naive supervised classifier can predict the conversational style with high accuracy (80%), and that crowd workers with an Involvement conversational style provided significantly higher output quality, exhibited higher user engagement, and perceived lower cognitive task load in comparison to their counterparts. Our findings have important implications for task design with respect to improving worker performance and engagement in microtask crowdsourcing.
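The abstract reports that even a naive supervised classifier can predict a worker's conversational style. As an illustration only (the paper's actual features and model are not described here), the sketch below shows a minimal nearest-centroid classifier over hand-picked lexical features of a worker's chat messages; the feature choices (message length, exclamations, questions) are assumptions loosely inspired by involvement-style cues, not the authors' method.

```python
# Hypothetical sketch: a "naive" supervised classifier for conversational style.
# The features and labels below are illustrative assumptions, not the paper's.

def extract_features(messages):
    """Turn one worker's chat messages into a small numeric feature vector."""
    n = max(len(messages), 1)
    text = " ".join(messages)
    return [
        sum(len(m.split()) for m in messages) / n,  # average words per message
        text.count("!") / n,                        # exclamations per message
        text.count("?") / n,                        # questions per message
    ]

class NaiveCentroidClassifier:
    """Nearest-centroid classifier: predicts the label whose mean feature
    vector is closest (squared Euclidean distance) to the input."""

    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, lbl in zip(X, y) if lbl == label]
            self.centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
        return self

    def predict(self, x):
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl]))

# Toy usage with two style labels (training data is fabricated for illustration):
X_train = [[10, 2, 0], [12, 3, 1], [3, 0, 0], [2, 0, 1]]
y_train = ["involvement", "involvement", "considerateness", "considerateness"]
clf = NaiveCentroidClassifier().fit(X_train, y_train)
prediction = clf.predict(extract_features(["Great task!!", "Loved it, what next?"]))
```

In practice such a baseline would be trained on labeled conversation transcripts and evaluated with cross-validation; the point here is only that a very simple supervised model can separate styles once features are extracted.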

Files

3392837.pdf
(pdf | 1.44 Mb)
- Embargo expired in 08-11-2021