Searched for: author:"Kubalík, Jiří"
(1 - 11 of 11)
Vastl, Martin (author), Kulhanek, Jonas (author), Kubalík, Jiří (author), Derner, Erik (author), Babuska, R. (author)
Many real-world systems can be naturally described by mathematical formulas. The task of automatically constructing formulas to fit observed data is called symbolic regression. Evolutionary methods such as genetic programming have been commonly used to solve symbolic regression tasks, but they have significant drawbacks, such as high...
journal article 2024
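The symbolic regression task this abstract defines can be illustrated with a toy sketch: given samples (x, y), search a space of candidate formulas for the one that minimizes the fit error. The candidate set and data below are invented for illustration; the papers listed here use far more capable evolutionary and neural searches.

# Toy illustration of the symbolic regression task: pick the formula
# from a small candidate set that best fits the observed data.
# Not the authors' method; candidates and data are invented.
import math

xs = [i / 10 for i in range(1, 50)]
ys = [2.0 * x + math.sin(x) for x in xs]          # hidden ground truth

candidates = {
    "2*x":          lambda x: 2.0 * x,
    "x**2":         lambda x: x ** 2,
    "2*x + sin(x)": lambda x: 2.0 * x + math.sin(x),
    "exp(0.1*x)":   lambda x: math.exp(0.1 * x),
}

def mse(f):
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

best = min(candidates, key=lambda name: mse(candidates[name]))
print(best, mse(candidates[best]))                # -> 2*x + sin(x) 0.0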
Kubalík, Jiří (author), Derner, Erik (author), Babuska, R. (author)
Many real-world systems can be described by mathematical models that are human-comprehensible, easy to analyze and help explain the system's behavior. Symbolic regression is a method that can automatically generate such models from data. Historically, symbolic regression has been predominantly realized by genetic programming, a method that...
journal article 2023
Kubalík, Jiří (author), Derner, Erik (author), Babuska, R. (author)
Virtually all dynamic system control methods benefit from the availability of an accurate mathematical model of the system. This also includes methods such as reinforcement learning, which can be vastly sped up and made safer by using a dynamic system model. However, obtaining a sufficient amount of informative data for constructing dynamic...
journal article 2021
Kubalík, Jiří (author), Derner, Erik (author), Žegklitz, Jan (author), Babuska, R. (author)
Reinforcement learning algorithms can solve dynamic decision-making and optimal control problems. With continuous-valued state and input variables, reinforcement learning algorithms must rely on function approximators to represent the value function and policy mappings. Commonly used numerical approximators, such as neural networks or basis...
journal article 2021
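As an illustration of the numerical approximators this abstract contrasts with symbolic ones, here is a generic least-squares fit of a value function using radial basis features. The toy target values and all names below are assumptions for the sketch, not the paper's setup.

# Generic value-function approximation: represent V(x) ~ w . phi(x)
# with radial basis features and fit w by least squares.
# Illustrative only; not the paper's algorithm.
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)               # RBF centers on the state axis

def phi(x, width=0.5):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

# Pretend these targets came from, e.g., value iteration on a model.
states = np.linspace(-2.0, 2.0, 200)
v_targets = -states ** 2                          # toy "true" value function

Phi = np.stack([phi(x) for x in states])          # (200, 9) design matrix
w, *_ = np.linalg.lstsq(Phi, v_targets, rcond=None)

print("V(0.3) ~", phi(0.3) @ w)                   # approximately -0.09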
Derner, Erik (author), Kubalík, Jiří (author), Babuska, R. (author)
Continual model learning for nonlinear dynamic systems, such as autonomous robots, presents several challenges. First, it tends to be computationally expensive as the amount of data collected by the robot quickly grows in time. Second, the model accuracy is impaired when data from repetitive motions prevail in the training set and outweigh...
journal article 2021
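One common remedy for the repetitive-data problem this abstract raises is to admit a new sample into the training set only if it lies sufficiently far from the samples already stored. The sketch below is a hypothetical illustration of that idea, not the authors' selection criterion.

# Hypothetical sample filter: reject near-duplicates so repetitive
# motions do not dominate the training set.
import numpy as np

class InformativeBuffer:
    def __init__(self, min_dist=0.1):
        self.samples = []                # stored state-action vectors
        self.min_dist = min_dist

    def maybe_add(self, z):
        z = np.asarray(z, dtype=float)
        if all(np.linalg.norm(z - s) >= self.min_dist for s in self.samples):
            self.samples.append(z)
            return True
        return False

buf = InformativeBuffer(min_dist=0.5)
for z in [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (0.0, 1.0)]:
    buf.maybe_add(z)
print(len(buf.samples))                  # -> 3; the near-duplicate was rejected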
Derner, Erik (author), Kubalík, Jiří (author), Ancona, N. (author), Babuska, R. (author)
Developing mathematical models of dynamic systems is central to many disciplines of engineering and science. Models facilitate simulations, analysis of the system's behavior, decision making and design of automatic control algorithms. Even inherently model-free control techniques such as reinforcement learning (RL) have been shown to benefit...
journal article 2020
Alibekov, Eduard (author), Kubalík, Jiří (author), Babuska, R. (author)
Approximate Reinforcement Learning (RL) is a method to solve sequential decision-making and dynamic control problems in an optimal way. This paper addresses RL for continuous state spaces, where the control policy is derived from an approximate value function (V-function). The standard approach to derive a policy through the V-function is...
journal article 2019
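The standard policy-derivation step alluded to here is typically a one-step optimization: with a model f(x, u), a reward r(x, u) and the approximate V-function, pick the action that maximizes the one-step return over a discretized input set. A minimal sketch with toy stand-ins for all three functions:

# Derive a policy from an approximate V-function by one-step lookahead
# over a discretized action set. All functions are toy stand-ins.
import numpy as np

gamma = 0.99
actions = np.linspace(-1.0, 1.0, 21)              # discretized input set

def f(x, u):                                      # toy dynamics: next state
    return 0.9 * x + 0.5 * u

def r(x, u):                                      # toy stage reward
    return -(x ** 2) - 0.1 * (u ** 2)

def V(x):                                         # stand-in for the learned V-function
    return -(x ** 2)

def policy(x):
    returns = [r(x, u) + gamma * V(f(x, u)) for u in actions]
    return actions[int(np.argmax(returns))]

print(policy(1.0))                                # -> -1.0, steering the state toward 0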
Alibekov, Eduard (author), Kubalík, Jiří (author), Babuska, R. (author)
This paper addresses the problem of deriving a policy from the value function in the context of critic-only reinforcement learning (RL) in continuous state and action spaces. With continuous-valued states, RL algorithms have to rely on a numerical approximator to represent the value function. Numerical approximation due to its nature virtually...
journal article 2018
Kubalík, Jiří (author), Alibekov, Eduard (author), Babuska, R. (author)
Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper addresses the problem of finding a smooth policy...
journal article 2017
Alibekov, Eduard (author), Kubalík, Jiří (author), Babuska, R. (author)
This paper addresses the problem of deriving a policy from the value function in the context of reinforcement learning in continuous state and input spaces. We propose a novel method based on genetic programming to construct a symbolic function, which serves as a proxy to the value function and from which a continuous policy is derived. The...
conference paper 2016
Kubalík, Jiří (author), Alibekov, Eduard (author), Žegklitz, Jan (author), Babuska, R. (author)
This paper presents a first step of our research on designing an effective and efficient GP-based method for symbolic regression. First, we propose three extensions of the standard Single Node GP, namely (1) a selection strategy for choosing nodes to be mutated based on depth and performance of the nodes, (2) operators for placing a compact...
conference paper 2016
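Extension (1) above, selecting mutation targets by node depth and performance, can be pictured as a weighted random choice over the nodes of the evolved graph. The weighting and node representation below are hypothetical illustrations, not the paper's exact scheme.

# Hypothetical sketch: bias the choice of the node to mutate toward
# deeper and worse-performing nodes.
import random

random.seed(1)
# Each node: (identifier, depth in the graph, error of the expression rooted there)
nodes = [("n0", 1, 0.90), ("n1", 2, 0.40), ("n2", 3, 0.35), ("n3", 4, 0.05)]

def weight(depth, error, alpha=0.5):
    # Higher weight -> more likely to be selected for mutation.
    return alpha * depth + (1 - alpha) * error

weights = [weight(d, e) for _, d, e in nodes]
chosen = random.choices(nodes, weights=weights, k=1)[0]
print("mutate node:", chosen[0])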