Searched for: author:"Celemin, Carlos"
(1 - 9 of 9)
Celemin, Carlos (author), Kober, J. (author)
In order to deploy robots that can be adapted by non-expert users, interactive imitation learning (IIL) methods must be flexible regarding the teacher's interaction preferences and avoid assuming perfect teachers (oracles), instead accounting for mistakes influenced by diverse human factors. In this work, we propose an IIL...
journal article 2023
Celemin, Carlos (author), Kober, J. (author)
conference paper 2021
Scholten, Jan (author), Wout, Daan (author), Celemin, Carlos (author), Kober, J. (author)
Deep Reinforcement Learning has enabled the control of increasingly complex and high-dimensional problems. However, the need for vast amounts of data before reasonable performance is attained prevents its widespread application. We employ binary corrective feedback as a general and intuitive means to incorporate human intuition and domain...
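The snippet above refers to binary corrective feedback, where a teacher only signals the direction of an action error rather than demonstrating the correct action. A minimal sketch of one way such a signal could adjust a continuous action is below; the function name, error magnitude, and learning rate are illustrative assumptions, not the paper's exact algorithm.

```python
def corrective_update(policy_action, feedback, error_magnitude=0.1, learning_rate=1.0):
    """Shift a continuous action in the direction of binary human feedback.

    feedback: +1 ("increase the action") or -1 ("decrease the action").
    error_magnitude: assumed size of the error perceived by the teacher.
    """
    return policy_action + learning_rate * feedback * error_magnitude

# A teacher nudging a 1-D action toward a target of 0.5:
action = 0.0
for _ in range(5):
    feedback = 1 if action < 0.5 else -1
    action = corrective_update(action, feedback)
```

Because the teacher only gives a direction, the update is robust to imprecise humans: even noisy +1/-1 signals drive the action toward the intended value, at the cost of a fixed step size.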
conference paper 2020
Pérez-Dattari, Rodrigo (author), Celemin, Carlos (author), Franzese, G. (author), Ruiz-del-Solar, Javier (author), Kober, J. (author)
The ongoing industry revolution demands more flexible products, including robots in household environments and medium-scale factories. Such robots should be able to adapt to new conditions and environments and be programmed with ease. As an example, suppose that there are robot manipulators working on an industrial production line and...
journal article 2020
Pérez-Dattari, Rodrigo (author), Celemin, Carlos (author), Ruiz-del-Solar, Javier (author), Kober, J. (author)
Deep Reinforcement Learning (DRL) has become a powerful strategy for solving complex decision-making problems based on Deep Neural Networks (DNNs). However, it is highly data-demanding, making it unfeasible in physical systems for most applications. In this work, we approach an alternative Interactive Machine Learning (IML) strategy for training DNN...
conference paper 2020
Celemin, Carlos (author), Maeda, Guilherme (author), Ruiz-del-Solar, Javier (author), Peters, Jan (author), Kober, J. (author)
Robot learning problems are limited by physical constraints, which make learning successful policies for complex motor skills on real systems unfeasible. Some reinforcement learning methods, like Policy Search, offer stable convergence toward locally optimal solutions, whereas interactive machine learning or learning-from-demonstration methods...
journal article 2019
Celemin, Carlos (author), Kober, J. (author)
Some imitation learning approaches rely on Inverse Reinforcement Learning (IRL) methods to decode and generalize implicit goals given by expert demonstrations. The study of IRL normally assumes that expert demonstrations are available, which is not always the case. There are Machine Learning methods that allow non-expert teachers to...
conference paper 2019
Pérez-Dattari, Rodrigo (author), Celemin, Carlos (author), Ruiz-Del-Solar, Javier (author), Kober, J. (author)
Deep Reinforcement Learning (DRL) has become a powerful methodology for solving complex decision-making problems. However, DRL has several limitations when used in real-world problems (e.g., robotics applications). For instance, long training times are required and, in contrast to simulated environments, cannot be accelerated, and reward functions...
conference paper 2019
Celemin, Carlos (author), Ruiz-del-Solar, Javier (author), Kober, J. (author)
Reinforcement Learning agents can be supported by feedback from human teachers in the learning loop to guide the learning process. In this work we propose two hybrid strategies of Policy Search Reinforcement Learning and Interactive Machine Learning that benefit from both sources of information: the cost function and the human corrective...
journal article 2018
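The snippet above describes combining a cost function with human corrective feedback. A hedged sketch of one way such a hybrid update could interleave the two information sources on a 1-D policy parameter is below; the function name, learning rates, and quadratic toy cost are illustrative assumptions, not the paper's exact hybrid strategies.

```python
def hybrid_step(theta, cost_gradient, human_feedback, lr_rl=0.05, lr_human=0.1):
    """One parameter update mixing both information sources.

    cost_gradient: gradient of the task cost at theta (policy-search term).
    human_feedback: -1, 0, or +1 corrective signal (0 = no advice given).
    """
    theta = theta - lr_rl * cost_gradient      # follow the cost function
    theta = theta + lr_human * human_feedback  # follow the teacher's correction
    return theta

# Toy quadratic cost (theta - 1)^2, with an occasional corrective nudge
# from a teacher who knows the optimum is at 1.0:
theta = 0.0
for step in range(20):
    grad = 2.0 * (theta - 1.0)
    feedback = 1 if (step % 5 == 0 and theta < 1.0) else 0
    theta = hybrid_step(theta, grad, feedback)
```

The gradient term alone converges, but slowly; the sparse human corrections inject larger steps early on, which is the kind of speed-up the hybrid strategies aim for.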