Decentralized reinforcement learning applied to mobile robots

Conference Paper (2017)
Author(s)

David L. Leottau (Universidad de Chile)

Aashish Vatsyayan (Student TU Delft)

Javier Ruiz-del-Solar (Universidad de Chile)

R. Babuska (TU Delft - Learning & Autonomous Control)

Research Group
Learning & Autonomous Control
DOI related publication
https://doi.org/10.1007/978-3-319-68792-6_31
Publication Year
2017
Language
English
Volume number
LNAI 9776
Pages (from-to)
368-379
ISBN (print)
978-3-319-68791-9
ISBN (electronic)
978-3-319-68792-6

Abstract

In this paper, decentralized reinforcement learning is applied to a control problem with a multidimensional action space. We propose a decentralized reinforcement learning architecture for a mobile robot, in which the individual components of the commanded velocity vector are learned in parallel by separate agents. We empirically demonstrate that the decentralized architecture outperforms its centralized counterpart in terms of learning time, while using fewer computational resources. The method is validated on two problems: an extended version of the three-dimensional mountain car, and a ball-pushing behavior performed with a differential-drive robot, which is also tested on a physical setup.
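To illustrate the decentralized scheme described in the abstract, the sketch below runs one independent tabular Q-learning agent per action dimension: every agent observes the same state, chooses its own component of the joint action, and is updated with the shared global reward. It is a minimal illustration only, not the authors' implementation; the toy environment, state/action sizes, and hyperparameters are assumptions and do not reproduce the paper's mountain-car or ball-pushing setups.

```python
import numpy as np

# Illustrative sketch (not the paper's code): decentralized Q-learning in which
# each agent learns one component of a 2-D action (e.g., the linear and angular
# velocity commands of a differential-drive robot). All agents see the same
# discretized state and receive the same global reward.

rng = np.random.default_rng(0)

N_STATES = 25          # toy discretized state space (assumption)
N_ACTIONS_PER_DIM = 5  # discrete levels per velocity component (assumption)
N_AGENTS = 2           # one agent per action dimension
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # illustrative hyperparameters

# One independent Q-table per agent: each agent spans only its own action
# dimension, so storage grows linearly with the number of dimensions
# (2 * 25 * 5 entries) instead of exponentially (25 * 5**2 for a centralized
# learner over the joint action space).
Q = [np.zeros((N_STATES, N_ACTIONS_PER_DIM)) for _ in range(N_AGENTS)]

def toy_env_step(state, joint_action):
    """Placeholder environment: random next state and a reward that favors
    matching action components (purely illustrative, not the paper's tasks)."""
    next_state = rng.integers(N_STATES)
    reward = 1.0 if joint_action[0] == joint_action[1] else -0.1
    done = rng.random() < 0.05
    return next_state, reward, done

def select_actions(state):
    """Each agent picks its own action component epsilon-greedily."""
    return [rng.integers(N_ACTIONS_PER_DIM) if rng.random() < EPS
            else int(np.argmax(Q[i][state])) for i in range(N_AGENTS)]

for episode in range(200):
    state, done = rng.integers(N_STATES), False
    while not done:
        actions = select_actions(state)
        next_state, reward, done = toy_env_step(state, actions)
        # Independent Q-learning update per agent, driven by the shared reward.
        for i in range(N_AGENTS):
            target = reward + (0.0 if done else GAMMA * Q[i][next_state].max())
            Q[i][state, actions[i]] += ALPHA * (target - Q[i][state, actions[i]])
        state = next_state
```

The design point the sketch highlights is the one argued in the abstract: because each agent searches only its own action dimension, the per-step computation and the number of learned values stay small even as the dimensionality of the commanded velocity vector grows, whereas a centralized learner must cover the full joint action space.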

Metadata only record. There are no files for this record.