Searched for: subject:"Control"
(1 - 12 of 12)
document
Wu, D. (author), Zhang, R. (author), Pore, Ameya (author), Ha, Xuan Thao (author), Li, Z. (author), Herrera, Fernando (author), Kowalczyk, Wojtek (author), De Momi, Elena (author), Dankelman, J. (author), Kober, J. (author)
Minimally Invasive Procedures (MIPs) emerged as an alternative to more invasive surgical approaches, offering patient benefits such as smaller incisions, less pain, and shorter hospital stays. In one class of MIPs, where natural body lumens or small incisions are used to access deeper anatomical locations, Flexible Surgical and Interventional...
review 2024
document
Ding, J. (author), van Loben Sels, Mees A. (author), Angelini, Franco (author), Kober, J. (author), Della Santina, C. (author)
Quadrupeds deployed in real-world scenarios need to be robust to unmodelled dynamic effects. In this work, we aim to increase the robustness of quadrupedal periodic forward jumping (i.e., pronking) by unifying cutting-edge model-based trajectory optimization and iterative learning control. Using a reduced-order soft anchor model, the...
journal article 2023
document
Pérez-Dattari, Rodrigo (author), Ferreira de Brito, B.F. (author), de Groot, O.M. (author), Kober, J. (author), Alonso-Mora, J. (author)
The successful integration of autonomous robots in real-world environments strongly depends on their ability to reason from context and take socially acceptable actions. Current autonomous navigation systems mainly rely on geometric information and hard-coded rules to induce safe and socially compliant behaviors. Yet, in unstructured urban...
journal article 2022
document
Mészáros, A. (author), Franzese, G. (author), Kober, J. (author)
This work investigates how the intricate task of a continuous pick & place (P&P) motion may be learned from humans based on demonstrations and corrections. Due to the complexity of the task, these demonstrations are often slow and even slightly flawed, particularly at moments when multiple aspects (i.e., end-effector movement,...
journal article 2022
document
van der Heijden, D.S. (author), Ferranti, L. (author), Kober, J. (author), Babuska, R. (author)
This paper presents DeepKoCo, a novel model-based agent that learns a latent Koopman representation from images. This representation allows DeepKoCo to plan efficiently using linear control methods, such as linear model predictive control. Compared to traditional agents, DeepKoCo learns task-relevant dynamics, thanks to the use of a tailored lossy...
conference paper 2021
document
de Bruin, T.D. (author), Kober, J. (author), Tuyls, Karl (author), Babuska, R. (author)
Deep reinforcement learning makes it possible to train control policies that map high-dimensional observations to actions. These methods typically use gradient-based optimization techniques to enable relatively efficient learning, but are notoriously sensitive to hyperparameter choices and do not have good convergence properties. Gradient...
journal article 2020
document
Pane, Yudha P. (author), Nageshrao, Subramanya P. (author), Kober, J. (author), Babuska, R. (author)
Smart robotics will be a core feature while migrating from Industry 3.0 (i.e., mass manufacturing) to Industry 4.0 (i.e., customized or social manufacturing). A key characteristic of a smart system is its ability to learn. For smart manufacturing, this means incorporating learning capabilities into the current fixed, repetitive, task-oriented...
journal article 2019
document
Buşoniu, Lucian (author), de Bruin, T.D. (author), Tolić, Domagoj (author), Kober, J. (author), Palunko, Ivana (author)
Reinforcement learning (RL) offers powerful algorithms to search for optimal controllers of systems with nonlinear, possibly stochastic dynamics that are unknown or highly uncertain. This review mainly covers artificial-intelligence approaches to RL, from the viewpoint of the control engineer. We explain how approximate representations of the...
review 2018
document
de Bruin, T.D. (author), Kober, J. (author), Tuyls, K.P. (author), Babuska, R. (author)
Experience replay is a technique that allows off-policy reinforcement-learning methods to reuse past experiences. The stability and speed of convergence of reinforcement learning, as well as the eventual performance of the learned policy, are strongly dependent on the experiences being replayed. Which experiences are replayed depends on two...
journal article 2018
document
Manschitz, Simon (author), Gienger, Michael (author), Kober, J. (author), Peters, Jan (author)
In this letter, we introduce Mixture of Attractors, a novel movement primitive representation that allows for learning complex object-relative movements. The movement primitive representation inherently supports multiple coordinate frames, enabling the system to generalize a skill to unseen object positions and orientations. In contrast to...
journal article 2018
document
Feirstein, D.S. (student author), Koryakovskiy, I. (author), Kober, J. (author), Vallery, H. (author)
Reinforcement learning is a powerful tool to derive controllers for systems where no models are available. Particularly policy search algorithms are suitable for complex systems, to keep learning time manageable and account for continuous state and action spaces. However, these algorithms demand more insight into the system to choose a...
conference paper 2016
document
de Bruin, T.D. (author), Kober, J. (author), Tuyls, K.P. (author), Babuska, R. (author)
Recent years have seen a growing interest in the use of deep neural networks as function approximators in reinforcement learning. In this paper, an experience replay method is proposed that ensures that the distribution of the experiences used for training is between that of the policy and a uniform distribution. Through experiments on a...
conference paper 2016