Using Sign-Language as an Input Modality for Microtask Crowdsourcing

Abstract

A variety of input modalities have been developed across technological landscapes such as crowdsourcing and conversational agents. However, sign language remains an input modality that has received little attention. Although a large number of people around the world use sign language as their primary language, few efforts have been made to include them in these technological landscapes. In this thesis, we draw attention to, and take a step towards, the inclusion of deaf and mute people in microtask crowdsourcing.
We identify existing technical and research gaps in current architectures for real-time sign language recognition and translation. Next, we determine which microtasks can be adapted to use sign language as input, keeping in mind the challenges it introduces. We then investigate the effectiveness of sign language as an input modality by building a web application, SignUpCrowd, for two microtask crowdsourcing tasks, Visual Question Answering and Tweet Sentiment Analysis, and comparing it with prevalent input modalities such as text and click. This comparison helps quantify how sign-language input differs from popular input modalities and reveals workers' input-type preferences for the given microtasks. To this end, we developed three web applications, one per input type, and conducted a between-subjects experimental study on Prolific in which workers (N=240) performed the above-mentioned tasks using sign language, text, and click input. Our results indicate that, in terms of task completion time and task accuracy, sign language as an input modality for microtask crowdsourcing does not differ significantly from other commonly used input types. We also observed that workers preferred sign-language input over text input for the given microtasks. Although participants with no knowledge of sign language found it difficult, this input modality targets a different audience. These findings show that there is scope for sign language as an input modality in microtask crowdsourcing, and pave the way for further efforts to introduce sign language into real-world applications.