Conflicting demonstrations in Inverse Reinforcement Learning
R.M. Labbé (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Luciano C. Cavalcante Siebert – Mentor (TU Delft - Interactive Intelligence)
A. Caregnato Neto – Mentor (TU Delft - Interactive Intelligence)
J.M. Weber – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Abstract
This paper investigates the effect of conflicting demonstrations on Inverse Reinforcement Learning (IRL). IRL is a method for inferring the intent of an expert from demonstrations of that expert's behavior alone, which makes it a promising approach for areas such as self-driving vehicles, where large numbers of expert demonstrations are available. Demonstrations, however, may not always come from the same expert, and a single expert may prioritize different goals at different times. For example, a driver may not always do their grocery shopping at the same store, or they may take a slightly different route on different occasions. The results show that severely conflicting demonstrations negatively affect the ability of Maximum Entropy (MaxEnt) IRL to recover rewards, but are slightly more optimistic when the demonstrations stem from more than two goals.
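To make the MaxEnt IRL setting concrete: the algorithm alternates a backward pass (soft value iteration under the current reward estimate, yielding a stochastic policy) with a forward pass (expected state visitation frequencies under that policy), and follows the gradient of the demonstration log-likelihood, which equals the empirical feature counts minus the expected feature counts. The sketch below is a minimal tabular illustration under assumed inputs, not the implementation used in this thesis; the function name maxent_irl and all array shapes are illustrative.

```python
import numpy as np

def maxent_irl(feat, P, demos, lr=0.1, iters=200):
    """Hypothetical minimal tabular MaxEnt IRL sketch.

    feat  : (S, D) array, one feature vector per state
    P     : (A, S, S) array, P[a, s, s'] = transition probability
    demos : list of trajectories, each a list of state indices
    Returns reward weights theta, so the recovered reward is feat @ theta.
    """
    S, D = feat.shape

    # Horizon taken from the demonstrations (assumes roughly equal lengths).
    T = max(len(d) for d in demos)

    # Empirical feature expectations of the demonstrations.
    f_emp = np.mean([feat[d].sum(axis=0) for d in demos], axis=0)

    theta = np.zeros(D)
    for _ in range(iters):
        r = feat @ theta

        # Backward pass: finite-horizon soft (maximum-entropy) value iteration.
        V = np.zeros(S)
        for _ in range(T):
            Q = r[None, :] + P @ V               # (A, S) soft Q-values
            V = np.logaddexp.reduce(Q, axis=0)   # softmax over actions
        policy = np.exp(Q - V[None, :])          # (A, S), columns sum to 1

        # Forward pass: expected state visitation frequencies under policy,
        # starting from the empirical start-state distribution of the demos.
        d_t = np.zeros(S)
        for traj in demos:
            d_t[traj[0]] += 1.0 / len(demos)
        svf = d_t.copy()
        for _ in range(T - 1):
            d_t = np.einsum('s,as,ast->t', d_t, policy, P)
            svf += d_t

        # Gradient of the demo log-likelihood: empirical minus expected features.
        f_exp = svf @ feat
        theta += lr * (f_emp - f_exp)
    return theta
```

In this framing, conflicting demonstrations enter through demos: trajectories generated under different underlying goals pull f_emp toward an average that a single linear reward feat @ theta may be unable to match, which is one way to see why severe conflict degrades reward recovery.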