A large-scale study of agents learning from human reward (Extended abstract)

Abstract

The TAMER framework, which provides a way for agents to learn to solve tasks using human-generated rewards, has been examined in several small-scale studies, each with a few dozen subjects. In this paper, we present the results of the first large-scale study of TAMER, which was performed at the NEMO science museum in Amsterdam and involved 561 subjects. Our results show for the first time that an agent using TAMER can successfully learn to play Infinite Mario, a challenging reinforcement-learning benchmark problem based on the popular video game, given feedback from both adult (N = 209) and child (N = 352) trainers. In addition, our results corroborate prior findings on the importance of bidirectional feedback and competitive elements in the training interface. Finally, our results also shed light on the potential for using trainers’ facial expressions as a reward signal, as well as the role of age and gender in trainer behavior and agent performance.
