Influence-aware memory architectures for deep reinforcement learning in POMDPs

Journal Article (2022)
Author(s)

M. Suau de Castro (TU Delft - Interactive Intelligence)

J. He (TU Delft - Interactive Intelligence)

E. Congeduti (TU Delft - Interactive Intelligence)

Rolf Starre (TU Delft - Interactive Intelligence)

A.T. Czechowski (TU Delft - Interactive Intelligence)

Frans A. Oliehoek (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Copyright
© 2022 M. Suau, J. He, E. Congeduti, R.A.N. Starre, A.T. Czechowski, F.A. Oliehoek
DOI related publication
https://doi.org/10.1007/s00521-022-07691-7
Publication Year
2022
Language
English
Issue number
19
Volume number
37
Pages (from-to)
13145-13161
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Due to perceptual limitations, an agent may have too little information about the environment to act optimally. In such cases, it is important to keep track of the action-observation history to uncover hidden state information. Recent deep reinforcement learning methods use recurrent neural networks (RNNs) to memorize past observations. However, these models are expensive to train and have convergence difficulties, especially when dealing with high-dimensional data. In this paper, we propose influence-aware memory, a theoretically inspired memory architecture that alleviates the training difficulties by restricting the input of the recurrent layers to those variables that influence the hidden state information. Moreover, as opposed to standard RNNs, in which every piece of information used for estimating Q-values is inevitably fed back into the network for the next prediction, our model allows information to flow without necessarily being stored in the RNN's internal memory. Results indicate that, by letting the recurrent layers focus on a small fraction of the observation variables while processing the rest of the information with a feedforward neural network, we can outperform standard recurrent architectures both in training speed and policy performance. This approach also reduces runtime and obtains better scores than methods that stack multiple observations to remove partial observability.
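
To make the architectural idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of an influence-aware Q-network: only a chosen subset of observation variables (the assumed "influence sources", here given by influence_idx) is routed through a recurrent layer, while the remaining variables bypass memory through a feedforward branch; both streams are concatenated to estimate Q-values. Class and parameter names (InfluenceAwareQNet, obs_dim, influence_idx, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class InfluenceAwareQNet(nn.Module):
    """Sketch of an influence-aware memory Q-network (assumed structure).

    The observation variables assumed to carry hidden-state information
    (the influence sources) are fed to the recurrent layer; the remaining
    variables skip memory and are processed by a feedforward branch.
    """

    def __init__(self, obs_dim, influence_idx, n_actions,
                 rnn_hidden=64, ff_hidden=128):
        super().__init__()
        # Indices of the observation variables routed through the RNN
        # (assumed to be known or chosen a priori for this sketch).
        self.register_buffer("influence_idx", torch.as_tensor(influence_idx))
        d_inf = len(influence_idx)
        d_rest = obs_dim - d_inf

        # Recurrent branch: memorizes only the influence sources.
        self.rnn = nn.GRU(d_inf, rnn_hidden, batch_first=True)
        # Feedforward branch: processes the rest without storing it in memory.
        self.ff = nn.Sequential(nn.Linear(d_rest, ff_hidden), nn.ReLU())
        # Q-head combines both streams.
        self.q_head = nn.Linear(rnn_hidden + ff_hidden, n_actions)

    def forward(self, obs, h=None):
        # obs: (batch, time, obs_dim)
        mask = torch.zeros(obs.size(-1), dtype=torch.bool, device=obs.device)
        mask[self.influence_idx] = True
        x_inf = obs[..., mask]       # variables stored in the RNN's memory
        x_rest = obs[..., ~mask]     # variables that flow past the memory
        m, h = self.rnn(x_inf, h)    # recurrent features over influence sources
        f = self.ff(x_rest)          # memoryless features
        q = self.q_head(torch.cat([m, f], dim=-1))
        return q, h


# Hypothetical usage: 20 observation variables, indices 0-2 treated as
# influence sources, 4 discrete actions.
if __name__ == "__main__":
    net = InfluenceAwareQNet(obs_dim=20, influence_idx=[0, 1, 2], n_actions=4)
    obs = torch.randn(8, 5, 20)      # (batch, time, obs_dim)
    q_values, hidden = net(obs)
    print(q_values.shape)            # torch.Size([8, 5, 4])
```

Keeping the recurrent input this small is what the abstract credits for the faster training and lower runtime relative to standard RNNs and observation stacking.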