Learning Task-Parameterized Skills From Few Demonstrations

Journal Article (2022)
Author(s)

J. Zhu (TU Delft - Learning & Autonomous Control)

Michael Gienger (Honda Research Institute Europe)

J. Kober (TU Delft - Learning & Autonomous Control)

Research Group
Learning & Autonomous Control
Copyright
© 2022 J. Zhu, Michael Gienger, J. Kober
DOI (related publication)
https://doi.org/10.1109/LRA.2022.3150013
Publication Year
2022
Language
English
Issue number
2
Volume number
7
Pages (from-to)
4063-4070
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Moving away from repetitive tasks, robots nowadays demand versatile skills that adapt to different situations. Task-parameterized learning improves the generalization of motion policies by encoding relevant contextual information in the task parameters, thereby enabling flexible task executions. However, training such a policy often requires collecting multiple demonstrations across different situations. Comprehensively creating these situations is non-trivial, which renders the method less applicable to real-world problems. Training with fewer demonstrations and situations is therefore desirable. This paper presents a novel concept that augments the original training dataset with synthetic data for policy improvement, thus allowing task-parameterized skills to be learned from few demonstrations.
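The abstract does not spell out the augmentation procedure, but the core idea of task-parameterized representations can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration and not the authors' method: a single 2D demonstration is expressed relative to a goal frame (the task parameter), and synthetic training situations are generated by sampling perturbed goal frames and mapping the frame-local trajectory back to the world frame. All names and values here (`to_frame`, `rot2d`, the sampling scales) are illustrative assumptions.

```python
# Hypothetical sketch of a task-parameterized representation with
# synthetic-situation augmentation; not the paper's actual algorithm.
import numpy as np

def rot2d(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def to_frame(points, origin, rotation):
    """Express world-frame points (N, 2) in a local task frame."""
    return (points - origin) @ rotation

def from_frame(local_points, origin, rotation):
    """Map task-frame points back into the world frame."""
    return local_points @ rotation.T + origin

# One demonstrated trajectory: a straight reach toward a goal location.
demo = np.linspace([0.0, 0.0], [1.0, 0.5], num=50)
goal = demo[-1]

# Encode the demonstration relative to the goal frame (axis-aligned here).
demo_in_goal = to_frame(demo, goal, np.eye(2))

# Synthesize new "situations" by sampling perturbed goal frames and
# reconstructing the frame-local trajectory in each of them.
rng = np.random.default_rng(seed=0)
synthetic = [
    from_frame(demo_in_goal,
               goal + rng.normal(scale=0.2, size=2),  # assumed origin noise
               rot2d(rng.uniform(-0.3, 0.3)))         # assumed orientation noise
    for _ in range(5)
]
print(f"{len(synthetic)} synthetic trajectories, each of shape {synthetic[0].shape}")
```

Under these assumptions, each synthetic trajectory is consistent with the demonstrated motion relative to its own goal frame, so the augmented set exposes the policy to more task-parameter values than were actually demonstrated.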

Files

Learning_Task_Parameterized_Sk... (pdf)
(pdf | 1.68 MB)
- Embargo expired in 11-08-2022
License info not available