The importance of experience replay database composition in deep reinforcement learning

Conference Paper (2015)
Author(s)

Tim de Bruin (TU Delft - OLD Intelligent Control & Robotics)

J. Kober (TU Delft - OLD Intelligent Control & Robotics)

K.P. Tuyls (TU Delft - OLD Intelligent Control & Robotics, University of Liverpool)

R. Babuska (TU Delft - OLD Intelligent Control & Robotics)

Research Group
OLD Intelligent Control & Robotics
Publication Year
2015
Language
English

Abstract

Recent years have seen a growing interest in the use of deep neural networks as function approximators in reinforcement learning. This paper investigates the potential of the Deep Deterministic Policy Gradient method for a robot control problem, both in simulation and in a real setup. The importance of the size and composition of the experience replay database is investigated, and some requirements on the distribution over the state-action space of the experiences in the database are identified. Of particular interest is the importance of negative experiences that are not close to an optimal policy. It is shown how training with samples that are insufficiently spread over the state-action space can cause the method to fail, and how maintaining the distribution over the state-action space of the samples in the experience database can greatly benefit learning.
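To illustrate the idea of keeping replay samples spread over the state-action space rather than letting the database collapse onto experiences near the current policy, the sketch below shows one possible buffer design: a recent FIFO window combined with a reservoir-sampled store of older (often exploratory or "negative") transitions. This is only a minimal illustrative sketch under assumed buffer sizes and class/method names (MixedReplayBuffer, add, sample); it is not the specific database-maintenance strategy proposed in the paper.

```python
import random
from collections import deque


class MixedReplayBuffer:
    """Illustrative replay buffer that mixes a FIFO window of recent
    transitions with a reservoir sample over the whole history, so that
    sampled batches stay spread over the state-action space instead of
    concentrating around the current (near-optimal) policy.
    Sizes and method names are assumptions for this sketch."""

    def __init__(self, recent_size=50_000, reservoir_size=50_000):
        self.recent = deque(maxlen=recent_size)  # most recent transitions
        self.reservoir = []                      # long-term, diverse transitions
        self.reservoir_size = reservoir_size
        self.seen = 0                            # transitions offered to the reservoir

    def add(self, state, action, reward, next_state, done):
        transition = (state, action, reward, next_state, done)
        self.recent.append(transition)
        # Reservoir sampling keeps an (approximately) uniform sample over the
        # entire history, preserving early exploratory experiences that a
        # plain FIFO buffer would eventually discard.
        self.seen += 1
        if len(self.reservoir) < self.reservoir_size:
            self.reservoir.append(transition)
        else:
            idx = random.randrange(self.seen)
            if idx < self.reservoir_size:
                self.reservoir[idx] = transition

    def sample(self, batch_size):
        # Draw roughly half the batch from recent data and half from the
        # reservoir, so minibatches cover both current and past behaviour.
        half = batch_size // 2
        batch = random.sample(list(self.recent), min(half, len(self.recent)))
        remaining = batch_size - len(batch)
        batch += random.sample(self.reservoir, min(remaining, len(self.reservoir)))
        return batch
```

A buffer like this would be used in place of a standard FIFO replay memory in an off-policy learner such as DDPG: transitions are added after every environment step and minibatches are drawn with `sample` for each gradient update.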

Metadata-only record; no files are available for this record.