Zhu, P. (author), Wang, Z. (author), Yang, J. (author), Hauff, C. (author), Anand, A. (author)
Quality control is essential for creating extractive question answering (EQA) datasets via crowdsourcing. Aggregation across answers, i.e., word spans within passages annotated by different crowd workers, is one major focus for ensuring dataset quality. However, crowd workers cannot reach a consensus on a considerable portion of questions. We...
conference paper 2022
Chen, G. (author), Yang, J. (author), Hauff, C. (author), Houben, G.J.P.M. (author)
We present LearningQ, a challenging educational question generation dataset containing over 230K document-question pairs. It includes 7K instructor-designed questions assessing knowledge concepts being taught and 223K learner-generated questions seeking in-depth understanding of the taught concepts. We show that, compared to existing datasets...
conference paper 2018
Yang, J. (author), Hauff, C. (author), Bozzon, A. (author), Houben, G.J.P.M. (author)
Community Question Answering (cQA) platforms are a very popular repository of crowd-generated knowledge. By formulating questions, users express needs that other members of the cQA community try to collaboratively satisfy. Poorly formulated questions are less likely to receive useful responses, thus hindering the overall knowledge generation...
conference paper 2014