Prioritized Experience Replay based on the Wasserstein Metric in Deep Reinforcement Learning

The regularizing effect of modelling return distributions

Abstract

This thesis tests the hypothesis that distributional deep reinforcement learning (RL) algorithms achieve increased performance over expectation-based deep RL because of the regularizing effect of fitting a more complex model. This hypothesis was tested by comparing two variations of the distributional QR-DQN algorithm combined with prioritized experience replay. The first variation, called QR-W, prioritizes learning the return distributions. The second one, QR-TD, prioritizes learning the Q-values. These algorithms were tested with a range of network architectures, from large architectures prone to overfitting to smaller ones prone to underfitting. To verify the findings, the experiment was repeated in two environments. As hypothesized, QR-W performed better on the networks prone to overfitting, and QR-TD performed better on networks prone to underfitting. This suggests that fitting distributions has a regularizing effect, which at least partially explains the performance of distributional algorithms. To compare QR-TD and QR-W to conventional benchmarks from the literature, they were tested in the Enduro environment from the Arcade Learning Environment proposed by Bellemare. QR-W outperformed the state-of-the-art algorithms IQN and Rainbow in a quarter of the training time.
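To make the contrast between the two prioritization schemes concrete, the sketch below illustrates one plausible way the replay priorities could be computed for a single transition. This is an illustrative assumption, not the thesis code: it assumes QR-W derives its priority from the quantile Huber regression loss of QR-DQN (a surrogate for the Wasserstein distance between predicted and target return distributions), while QR-TD derives its priority from the absolute TD error of the Q-values, i.e. the means of those distributions. Function names, shapes, and the hyperparameter kappa are hypothetical.

```python
import numpy as np


def qr_w_priority(quantiles, target_quantiles, kappa=1.0):
    """QR-W style priority (assumed): the quantile Huber loss between the
    predicted and target return quantiles, measuring how poorly the full
    return distribution is currently fit."""
    n = len(quantiles)
    taus = (np.arange(n) + 0.5) / n                       # quantile midpoints
    u = target_quantiles[None, :] - quantiles[:, None]    # pairwise errors
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    loss = np.abs(taus[:, None] - (u < 0)) * huber / kappa
    return loss.sum(axis=0).mean()


def qr_td_priority(quantiles, target_quantiles):
    """QR-TD style priority (assumed): the absolute TD error of the
    Q-values, i.e. the means of the two quantile distributions."""
    return abs(target_quantiles.mean() - quantiles.mean())


# Example: a transition whose distributional fit is poor but whose mean is
# nearly correct gets a high QR-W priority and a low QR-TD priority.
pred = np.array([0.0, 1.0, 2.0, 3.0])
target = np.array([-2.0, 0.5, 2.5, 5.0])
print(qr_w_priority(pred, target), qr_td_priority(pred, target))
```

Both priorities would then be fed into a standard prioritized experience replay buffer; only the error signal used for sampling differs between the two variants.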
