Searched for: subject:"artificial intelligence"
(1 - 4 of 4)
Ruelens, F. (author), Claessens, B.J. (author), Vandael, S. (author), De Schutter, B.H.K. (author), Babuska, R. (author), Belmans, R. (author)
Driven by recent advances in batch Reinforcement Learning (RL), this paper contributes to the application of batch RL to demand response. In contrast to conventional model-based approaches, batch RL techniques do not require a system identification step, making them more suitable for a large-scale implementation. This paper extends fitted Q...
journal article 2017
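
As context for the abstract above: fitted Q-iteration, the batch RL method the paper extends, repeatedly regresses Bellman targets onto a fixed batch of logged transitions, with no system identification step. Below is a minimal Python sketch using extremely randomized trees as the regressor (the classic choice for fitted Q-iteration); the data layout, function names, and hyperparameters are illustrative assumptions, not the authors' code.

# Minimal fitted Q-iteration sketch over an offline batch of
# (state, action, reward, next_state) tuples; all names illustrative.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(batch, actions, n_iters=50, gamma=0.95):
    """batch: list of (s, a, r, s_next); s, s_next are 1-D arrays, a is a scalar."""
    S = np.array([np.append(s, a) for s, a, _, _ in batch])      # (s, a) inputs
    R = np.array([r for _, _, r, _ in batch])
    S_next = np.array([s_next for _, _, _, s_next in batch])
    Q = None
    for _ in range(n_iters):
        if Q is None:
            targets = R                      # first iteration: Q_1 = immediate reward
        else:
            # Bellman backup: r + gamma * max_a' Q(s', a')
            q_next = np.column_stack([
                Q.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
                for a in actions])
            targets = R + gamma * q_next.max(axis=1)
        Q = ExtraTreesRegressor(n_estimators=50).fit(S, targets)
    return Q

# Greedy control: pick argmax over the discrete action set of
# Q.predict(np.append(s, a)[None, :]) at each time step.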
de Bruin, T.D. (author), Kober, J. (author), Tuyls, K.P. (author), Babuska, R. (author)
Recent years have seen a growing interest in the use of deep neural networks as function approximators in reinforcement learning. In this paper, an experience replay method is proposed that ensures that the distribution of the experiences used for training is between that of the policy and a uniform distribution. Through experiments on a...
conference paper 2016
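
One way to realize the mixed training distribution described above is to draw each minibatch partly from a FIFO buffer of recent (near on-policy) experiences and partly from a reservoir-sampled buffer that approximates a uniform distribution over the whole history. The sketch below illustrates that idea; the class name, split ratio beta, and buffer sizes are assumptions for illustration, not the paper's exact method.

# Replay buffer mixing recent (on-policy-like) and uniform samples.
import random
from collections import deque

class MixedReplayBuffer:
    def __init__(self, capacity, beta=0.5):
        self.fifo = deque(maxlen=capacity)   # most recent experiences
        self.reservoir = []                  # ~uniform over all history
        self.capacity = capacity
        self.beta = beta                     # fraction drawn from FIFO
        self.n_seen = 0

    def add(self, experience):
        self.fifo.append(experience)
        self.n_seen += 1
        if len(self.reservoir) < self.capacity:
            self.reservoir.append(experience)
        else:
            # reservoir sampling: every past experience kept with equal probability
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.reservoir[j] = experience

    def sample(self, batch_size):
        n_fifo = int(self.beta * batch_size)
        batch = random.sample(self.fifo, min(n_fifo, len(self.fifo)))
        batch += random.sample(self.reservoir,
                               min(batch_size - len(batch), len(self.reservoir)))
        return batch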
Munk, J. (author), Kober, J. (author), Babuska, R. (author)
Deep Neural Networks (DNNs) can be used as function approximators in Reinforcement Learning (RL). One advantage of DNNs is that they can cope with large input dimensions. Instead of relying on feature engineering to lower the input dimension, DNNs can extract the features from raw observations. The drawback of this end-to-end learning is that it...
conference paper 2016
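
The end-to-end setup described above can be illustrated with a small convolutional network that maps raw image observations directly to Q-values, so no hand-engineered, low-dimensional features are needed. A minimal PyTorch sketch follows; the architecture and layer sizes are illustrative assumptions, not the network used in the paper.

# Raw pixels in, Q-values out: the DNN learns its own features.
import torch
import torch.nn as nn

class EndToEndQNetwork(nn.Module):
    def __init__(self, n_actions, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(          # learned feature extractor
            nn.Conv2d(in_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten())
        self.head = nn.LazyLinear(n_actions)    # Q-value per discrete action

    def forward(self, obs):                     # obs: (batch, C, H, W) floats
        return self.head(self.features(obs))

q_net = EndToEndQNetwork(n_actions=4)
q_values = q_net(torch.rand(2, 3, 84, 84))      # e.g. two 84x84 RGB frames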
Alibekov, E. (author), Kubalík, Jiří (author), Babuska, R. (author)
This paper addresses the problem of deriving a policy from the value function in the context of reinforcement learning in continuous state and input spaces. We propose a novel method based on genetic programming to construct a symbolic function, which serves as a proxy to the value function and from which a continuous policy is derived. The...
conference paper 2016
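
The policy-derivation step the abstract refers to can be sketched as follows: given a smooth proxy V for the value function (which the paper constructs by genetic programming; a hand-written stand-in expression is used here) and a known model f, the continuous control input maximizes the one-step Bellman right-hand side. The dynamics, reward, and input bounds below are illustrative assumptions, not the paper's benchmark.

# Continuous policy from a symbolic value-function proxy:
# u(x) = argmax_u [ r(x, u) + gamma * V(f(x, u)) ]
import numpy as np
from scipy.optimize import minimize_scalar

gamma = 0.99
V = lambda x: -x[0]**2 - 0.5 * x[1]**2           # symbolic proxy (stand-in)
f = lambda x, u: np.array([x[0] + 0.1 * x[1],    # assumed system dynamics
                           x[1] + 0.1 * u])
r = lambda x, u: -x[0]**2 - 0.01 * u**2          # assumed stage reward

def policy(x, u_min=-2.0, u_max=2.0):
    # maximize by minimizing the negated Bellman right-hand side over u
    res = minimize_scalar(lambda u: -(r(x, u) + gamma * V(f(x, u))),
                          bounds=(u_min, u_max), method="bounded")
    return res.x

print(policy(np.array([1.0, -0.5])))             # continuous action for this state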