Searched for: contributor:"Giua, Alessandro (editor)"
(1 - 4 of 4)
Xu, J. (author), Busoniu, L. (author), van den Boom, A.J.J. (author), De Schutter, B.H.K. (author)
This paper addresses the infinite-horizon optimal control problem for max-plus linear systems where the considered objective function is a sum of discounted stage costs over an infinite horizon. The minimization problem of the cost function is equivalently transformed into a maximization problem of a reward function. The resulting optimal...
conference paper 2016
Munk, J. (author), Kober, J. (author), Babuska, R. (author)
Deep Neural Networks (DNNs) can be used as function approximators in Reinforcement Learning (RL). One advantage of DNNs is that they can cope with large input dimensions. Instead of relying on feature engineering to lower the input dimension, DNNs can extract the features from raw observations. The drawback of this end-to-end learning is that it...
conference paper 2016
Alibekov, Eduard (author), Kubalík, Jiří (author), Babuska, R. (author)
This paper addresses the problem of deriving a policy from the value function in the context of reinforcement learning in continuous state and input spaces. We propose a novel method based on genetic programming to construct a symbolic function, which serves as a proxy to the value function and from which a continuous policy is derived. The...
conference paper 2016
Baldi, S. (author)
This work proposes an iterative procedure for static output feedback of polynomial systems based on Sum-of-Squares optimization. Necessary and sufficient conditions for static output feedback stabilization of polynomial systems are formulated, both for the global and for the local stabilization case. Since the proposed conditions are bilinear...
conference paper 2016