MAMBPO

Sample-efficient multi-robot reinforcement learning using learned world models

Conference Paper (2021)
Author(s)

Daniel Willemsen (Student TU Delft)

Mario Coppola (TU Delft - Control & Simulation)

Guido C.H.E. de Croon (TU Delft - Control & Simulation)

Research Group
Control & Simulation
DOI
https://doi.org/10.1109/IROS51168.2021.9635836
Publication Year
2021
Language
English
Pages (from-to)
5635-5640
ISBN (print)
978-1-6654-1715-0
ISBN (electronic)
978-1-6654-1714-3

Abstract

Multi-robot systems can benefit from reinforcement learning (RL) algorithms that learn behaviours in a small number of trials, a property known as sample efficiency. This research therefore investigates the use of learned world models to improve sample efficiency. We present a novel multi-agent model-based RL algorithm, Multi-Agent Model-Based Policy Optimization (MAMBPO), built on the Centralized Learning for Decentralized Execution (CLDE) framework. CLDE algorithms allow a group of agents to act in a fully decentralized manner after training, a desirable property for many systems comprising multiple robots. MAMBPO uses a learned world model to improve sample efficiency compared to the model-free Multi-Agent Soft Actor-Critic (MASAC). We demonstrate this on two simulated multi-robot tasks, where MAMBPO achieves performance similar to MASAC but requires far fewer samples to do so. Through this, we take an important step towards making real-life learning for multi-robot systems possible.
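The core idea of the abstract — learn a world model from real multi-robot transitions, then generate synthetic transitions from it so the centralized learner needs fewer real samples — can be illustrated with a minimal Dyna/MBPO-style sketch. All names below (toy environment, tabular model, joint actions for two agents) are illustrative assumptions, not the authors' implementation.

```python
import random

random.seed(0)  # deterministic toy run


class ToyWorldModel:
    """Tabular world model: averages next state and reward per (state, joint action)."""

    def __init__(self):
        self.table = {}  # (state, joint_action) -> (sum_next_state, sum_reward, count)

    def update(self, state, joint_action, next_state, reward):
        s, r, n = self.table.get((state, joint_action), (0.0, 0.0, 0))
        self.table[(state, joint_action)] = (s + next_state, r + reward, n + 1)

    def predict(self, state, joint_action):
        s, r, n = self.table[(state, joint_action)]
        return s / n, r / n  # mean next state and mean reward


def real_step(state, joint_action):
    """Toy 2-agent environment: reward only when both agents pick action 1."""
    reward = 1.0 if joint_action == (1, 1) else 0.0
    return (state + 1) % 3, reward


model = ToyWorldModel()
replay = []        # real transitions (expensive to collect on hardware)
model_replay = []  # synthetic transitions from the learned model (cheap)

# Phase 1: collect real data and fit the world model.
state = 0
for _ in range(30):
    joint_action = (random.randint(0, 1), random.randint(0, 1))
    next_state, reward = real_step(state, joint_action)
    replay.append((state, joint_action, next_state, reward))
    model.update(state, joint_action, next_state, reward)
    state = next_state

# Phase 2: branch short model rollouts from previously visited real states,
# amplifying the data available to the centralized learner.
for s, a, _, _ in replay:
    ns_hat, r_hat = model.predict(s, a)
    model_replay.append((s, a, ns_hat, r_hat))

# A CLDE learner would now train each agent's decentralized actor (and a
# centralized critic) on replay + model_replay; here we only show the
# data amplification that drives the sample-efficiency gain.
print(len(replay), len(model_replay))  # → 30 30
```

In the full algorithm the tabular model would be a learned neural world model and the per-agent policies would be soft actor-critic agents trained centrally, but the loop structure — real rollouts, model fitting, synthetic branching, policy updates — is the same.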

Metadata only record. There are no files for this record.