Efficient exploration with Double Uncertain Value Networks

Title: Efficient exploration with Double Uncertain Value Networks
Authors: Moerland, T.M. (TU Delft Interactive Intelligence); Broekens, D.J. (TU Delft Interactive Intelligence); Jonker, C.M. (TU Delft Interactive Intelligence)
Date: 2017
Abstract: This paper studies directed exploration for reinforcement learning agents by tracking uncertainty about the value of each available action. We identify two sources of uncertainty that are relevant for exploration: the first originates from limited data (parametric uncertainty), while the second originates from the distribution of the returns (return uncertainty). We identify methods to learn these distributions with deep neural networks: parametric uncertainty is estimated with Bayesian dropout, while return uncertainty is propagated through the Bellman equation as a Gaussian distribution. We then identify that both can be jointly estimated in one network, which we call the Double Uncertain Value Network. The policy is derived directly from the learned distributions via Thompson sampling. Experimental results show that both types of uncertainty can vastly improve learning in domains with a strong exploration challenge.
To reference this document use: http://resolver.tudelft.nl/uuid:615d6642-d375-4f61-b1aa-6d69c9160bbb
Source: Deep Reinforcement Learning Symposium, NIPS 2017
Event: NIPS 2017, 2017-12-07, Long Beach, United States
Part of collection: Institutional Repository
Document type: conference paper
Rights: © 2017 T.M. Moerland, D.J. Broekens, C.M. Jonker
Files: PDF, MoerlandBroekensJonker_Ef ... and_1_.pdf (1.92 MB)
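To make the abstract's acting rule concrete, the sketch below illustrates Thompson sampling from a value network that captures both uncertainty types: parametric uncertainty via Monte Carlo dropout (dropout kept active at act time) and return uncertainty via a Gaussian (mean, log-variance) head per action. This is a minimal illustration, not the authors' implementation; the class and function names (DoubleUncertainValueNet, thompson_action), the single-hidden-layer architecture, and all sizes are assumptions.

```python
# Illustrative sketch only (not the paper's code): acting by Thompson sampling
# with both parametric uncertainty (MC dropout) and return uncertainty
# (Gaussian mean/log-variance per action). Architecture and names are assumed.
import torch
import torch.nn as nn


class DoubleUncertainValueNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int,
                 hidden: int = 128, p_drop: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),  # kept active when acting -> parametric uncertainty
        )
        self.mean_head = nn.Linear(hidden, n_actions)    # mean of return distribution
        self.logvar_head = nn.Linear(hidden, n_actions)  # log-variance of return distribution

    def forward(self, state: torch.Tensor):
        h = self.body(state)
        return self.mean_head(h), self.logvar_head(h)


def thompson_action(net: DoubleUncertainValueNet, state: torch.Tensor) -> int:
    """One stochastic dropout pass samples the network parameters; one draw
    from each action's Gaussian return distribution samples the returns;
    the agent then acts greedily with respect to the sampled values."""
    net.train()  # keep dropout stochastic (MC dropout)
    with torch.no_grad():
        mean, logvar = net(state.unsqueeze(0))
        std = torch.exp(0.5 * logvar)
        q_sample = mean + std * torch.randn_like(std)  # one return sample per action
        return int(torch.argmax(q_sample, dim=-1).item())


# Usage example with a random state and hypothetical sizes:
net = DoubleUncertainValueNet(state_dim=4, n_actions=2)
a = thompson_action(net, torch.randn(4))
```

Drawing a single dropout mask per decision, rather than averaging many passes, is what makes this Thompson sampling: each action choice is greedy under one plausible hypothesis about the value function, so exploration follows the learned uncertainty rather than an epsilon-greedy schedule.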