Crowdsourcing and human computation have enabled industry and scientists to create innovative solutions by harnessing organised collective human effort. In human computation platforms, workers are observed to spend a large amount of time searching for appropriate tasks due to the lack of effective task discovery mechanisms. This loss of time translates into a loss of incentives for workers and affects their motivation to solve more tasks. Task recommendation in human computation can not only help mitigate this problem, but can also elicit high-quality answers from motivated workers. While a few works have empirically demonstrated the benefits of task recommendation in human computation platforms, we advocate for a better scientific understanding of how worker and task modelling can help to systematically achieve faster and higher-quality task executions. To this end, the availability of tools able to “open the box” of human computation and offer direct control over worker and task properties, to be later used for recommendation, is fundamental. This paper presents BruteForce, a framework that simplifies experiments with commercial human computation platforms while offering task recommendation features based on a rich (and extensible) set of worker and task properties. We describe the characteristics of BruteForce and report on a set of preliminary experiments with three user profiling techniques, namely feature-independent, feature-based, and composite.