Evaluating BERT-based Rewards for Question Generation with Reinforcement Learning

Conference Paper (2021)
Author(s)

P. Zhu (TU Delft - Web Information Systems)

C. Hauff (TU Delft - Web Information Systems)

Research Group
Web Information Systems
DOI
https://doi.org/10.1145/3471158.3472240
Publication Year
2021
Language
English
Pages (from-to)
261-270
ISBN (electronic)
9781450386111

Abstract

Question generation (QG) systems aim to generate natural-language questions that are relevant to a given piece of text and can usually be answered by considering this text alone. Prior work has identified a range of shortcomings (including semantic drift and exposure bias) and has therefore turned to the reinforcement learning paradigm to improve the effectiveness of question generation. To this end, different reward functions have been proposed. Because these reward functions have typically been investigated empirically in different experimental settings (different datasets, models, and parameters), we lack a common framework to compare them fairly. In this paper, we first categorize existing rewards systematically. We then provide such a fair empirical evaluation of different reward functions (including three we propose here for QG) in a common framework. We find rewards that model answerability to be the most effective.
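The abstract describes training a question generator with reinforcement learning, where a reward function scores each sampled question. As a rough illustration of the mechanics (not the paper's actual implementation), the sketch below shows a REINFORCE-style scalar update driven by a toy answerability-like reward; the reward here is a simple token-overlap stand-in, whereas the paper's answerability rewards are model-based (e.g. BERT-based).

```python
def answerability_reward(question, context):
    # Hypothetical stand-in reward: fraction of question tokens that
    # also appear in the context. A real system would instead score
    # answerability with a trained QA/BERT-based model.
    q_tokens = set(question.lower().split())
    c_tokens = set(context.lower().split())
    return len(q_tokens & c_tokens) / max(len(q_tokens), 1)

def reinforce_step(log_prob, reward, baseline, lr=0.1):
    # REINFORCE-style scalar update: the gradient weight applied to the
    # log-probability of a sampled question is (reward - baseline),
    # so questions scoring above the baseline are reinforced.
    return lr * (reward - baseline) * log_prob

context = "The Eiffel Tower is located in Paris"
sampled = "Where is the Eiffel Tower located"
r = answerability_reward(sampled, context)
update = reinforce_step(log_prob=-2.0, reward=r, baseline=0.5)
```

In a full system, `log_prob` would come from the generator's probability of the sampled question, and the baseline would typically be the reward of a greedily decoded question (self-critical training); both are simplified to constants here.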

Metadata-only record: no files are available for this record.