Enabling Human Computation through Text-based Conversational Agents

Abstract

Human Computation (HC) has established itself as a powerful tool for carrying out certain simple and repetitive tasks in the form of microtasks, which to this day remain difficult for machines to automate.
With the recent surge of interest in machine learning, HC has attracted renewed attention as a popular means of acquiring training data in large quantities.
Traditional microtask crowdsourcing platforms, such as Amazon Mechanical Turk (AMT) or Figure Eight, are typically built using web-based interfaces.
However, the speed and quality of data acquired via the crowd are naturally limited by the number of available workers and their skill set.
We perceive a significant opportunity to expand the crowd by exploring alternatives to the traditional microtask crowdsourcing platforms that rely on web-based interfaces.
More specifically, as popular messaging services such as Telegram, WhatsApp and Facebook Messenger are used on a daily basis by millions of people across the world, we propose to perform HC activities inside these services through a text-based conversational agent (or chatbot).
We foresee new opportunities arising from conducting HC inside chatbots, which could leverage access to a potentially larger and more diverse crowd.
In this thesis, we take a first step towards a new alternative to the typical web-based interface used in HC.
Specifically, we investigate the viability of facilitating microtask crowdsourcing inside chatbots.
To this end, we design and implement a chatbot that acts as a medium for the execution of microtask crowdsourcing activities, which is then used for conducting several pilot experiments.
In addition, we propose a mapping from Web to Chatbot tasks for several commonly found User Interface (UI) elements inside worker interfaces.
Thereafter, we conduct an elaborate experimental campaign to gauge the feasibility of the chatbot as a new medium for performing generic microtasks, and crowd workers' interest in using it.
We designed, implemented and executed six common microtask crowdsourcing types: Information Finding, human OCR (CAPTCHA), Sentiment Analysis, Object Labelling, Image Annotation, and Speech Transcription.
For each task type, we implemented a microtask in both a web-based and conversational interface.
By measuring the execution time and quality of answers, and surveying the satisfaction of a total of 316 distinct workers recruited via Figure Eight, we show that chatbots can be effectively used as an alternative to the web-based interface for performing microtask crowd work.
We report that, of all workers who participated in the chatbot tasks, 98.3% indicated a positive experience and were satisfied with their interaction with the chatbot, while performance in terms of task execution time and output quality was generally comparable to the web-based interface.