Difference Rewards Policy Gradients

Conference Paper (2021)
Author(s)

Jacopo Castellini (University of Liverpool)

Frans Oliehoek (TU Delft - Interactive Intelligence)

Sam Devlin (Microsoft Research Cambridge)

Rahul Savani (University of Liverpool)

Research Group
Interactive Intelligence
Copyright
© 2021 Jacopo Castellini, F.A. Oliehoek, Sam Devlin, Rahul Savani
Publication Year
2021
Language
English
Pages (from-to)
1463-1465
ISBN (electronic)
9781450383073
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Policy gradient methods have become one of the most popular classes of algorithms for multi-agent reinforcement learning. However, a key challenge that many of these methods do not address is multi-agent credit assignment: assessing an individual agent’s contribution to the overall performance, which is crucial for learning good policies. We propose a novel algorithm called Dr.Reinforce that explicitly tackles this by combining difference rewards with policy gradients to allow for learning decentralized policies when the reward function is known. By differencing the reward function directly, Dr.Reinforce avoids the difficulties associated with learning the Q-function, as done by Counterfactual Multiagent Policy Gradients (COMA), a state-of-the-art difference rewards method. For applications where the reward function is unknown, we show the effectiveness of a version of Dr.Reinforce that learns a reward network used to estimate the difference rewards.
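
To make the difference-rewards idea concrete, below is a minimal Python sketch (not the authors' code) of the per-agent credit computed when the reward function is known, as in Dr.Reinforce: each agent's difference reward is the global reward minus a counterfactual baseline obtained by resampling only that agent's action under its own policy. The helper names (`reward_fn`, `policies`) are illustrative assumptions, not the paper's API.

```python
# A minimal sketch of the difference-rewards idea, assuming a known
# reward function. Names like `reward_fn` and `policies` are illustrative.

import numpy as np

def difference_rewards(state, joint_action, reward_fn, policies, n_actions):
    """Per-agent credit D_i = r(s, a) - E_{a_i' ~ pi_i}[ r(s, (a_-i, a_i')) ].

    reward_fn(state, joint_action) -> float : known global reward function
    policies[i][state]             -> array : agent i's action probabilities
    """
    r = reward_fn(state, joint_action)
    n_agents = len(joint_action)
    d = np.zeros(n_agents)
    for i in range(n_agents):
        # Counterfactual baseline: replace only agent i's action and
        # average the reward under agent i's own policy.
        baseline = 0.0
        for a_i in range(n_actions):
            counterfactual = list(joint_action)
            counterfactual[i] = a_i
            baseline += policies[i][state][a_i] * reward_fn(state, tuple(counterfactual))
        d[i] = r - baseline
    return d

# Toy usage: a 2-agent coordination game with a single state (state 0).
payoff = np.array([[1.0, 0.0], [0.0, 1.0]])
reward_fn = lambda s, a: payoff[a[0], a[1]]
policies = [{0: np.array([0.5, 0.5])}, {0: np.array([0.5, 0.5])}]
print(difference_rewards(0, (0, 0), reward_fn, policies, n_actions=2))
# -> [0.5 0.5]: each agent is credited for coordinating, relative to its baseline.
```

In a REINFORCE-style update, these per-agent difference rewards would replace the shared global return in each agent's gradient estimate, which is what addresses the credit assignment problem the abstract describes; when the reward function is unknown, the paper substitutes a learned reward network for `reward_fn`.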

Files

P1475.pdf
(pdf | 1.54 MB)
License info not available