Learning kinematic models using a single tele-demonstration


Abstract

To successfully perform manipulation tasks in an unknown environment, a robot must be able to learn the kinematic constraints of the objects within that environment. Over the years, many studies have investigated learning the kinematic models of articulated objects through a Learning from Demonstration (LfD) approach. The majority of these studies assume that the robot exclusively manipulates articulated objects. In reality, however, robots often manipulate free-space objects, which generally do not encounter any constraints. As a result, a human has to manually confirm which of the observed demonstrations concern articulated objects and which do not. Furthermore, most of these studies do not evaluate the quality of the kinematic models before learning them. As a consequence, the robot can learn incorrect or uncertain models, which could lead to task failure or even dangerous behavior.

In this report, the novel Kinematic Model Learner (KML) framework is introduced, which aims to solve both of these problems using a multi-modal approach. In designing it, special attention is given to the understandability of the framework and to its ability to be adjusted to different robot applications.

The KML framework consists of two separate frameworks, KMLtraj and KMLforce. After a demonstration is given, the KMLforce framework first uses the force data to determine whether the manipulated object is free-space or constrained. If the object is recognized as free-space, it is classified as such and the corresponding kinematic model is learned. If the object is classified as constrained, the KMLtraj framework uses the observed trajectory data to classify and learn the kinematic models of the constrained objects. To prevent the robot from learning incorrect or uncertain models, a probabilistic classifier is used that only learns a kinematic model if the confidence level exceeds a learning threshold.
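As a rough sketch, the two-stage decision described above could look like the following Python. All names, thresholds, and model fits here are illustrative assumptions, not the report's implementation: the force test is reduced to a simple magnitude threshold, the constrained-object classes are taken to be the usual prismatic (drawer-like) and revolute (door-like) models, and a normalised residual ratio stands in for the probabilistic confidence measure.

```python
import numpy as np

# Hypothetical tuning parameters; in the KML framework these would be
# determined per robot application.
FORCE_THRESHOLD = 2.0     # N: mean interaction force separating free-space from constrained
LEARNING_THRESHOLD = 0.8  # minimum confidence required before a model is learned


def classify_force(forces):
    """Stage 1 (KMLforce-style): label the object from end-effector force magnitudes."""
    mean_force = float(np.mean(np.linalg.norm(forces, axis=1)))
    return "free-space" if mean_force < FORCE_THRESHOLD else "constrained"


def _line_residual(pts):
    """Mean perpendicular distance to the best-fit line (prismatic model)."""
    centered = pts - pts.mean(axis=0)
    direction = np.linalg.svd(centered, full_matrices=False)[2][0]
    perp = centered - np.outer(centered @ direction, direction)
    return float(np.mean(np.linalg.norm(perp, axis=1)))


def _circle_residual(pts):
    """Mean radial deviation from an algebraic (Kasa) circle fit (revolute model)."""
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    cx, cy, c = np.linalg.lstsq(A, (pts ** 2).sum(axis=1), rcond=None)[0]
    r_sq = c + cx * cx + cy * cy
    if r_sq <= 0:  # degenerate fit: treat as a very poor circle
        return 1e6
    dists = np.linalg.norm(pts - np.array([cx, cy]), axis=1)
    return float(np.mean(np.abs(dists - np.sqrt(r_sq))))


def classify_trajectory(pts):
    """Stage 2 (KMLtraj-style): pick the better-fitting model plus a pseudo-confidence.

    The competing model's residual, normalised by the total residual, stands in
    for the probabilistic confidence used in the report.
    """
    res_line, res_circle = _line_residual(pts), _circle_residual(pts)
    if res_line <= res_circle:
        return "prismatic", res_circle / (res_line + res_circle + 1e-12)
    return "revolute", res_line / (res_line + res_circle + 1e-12)


def learn_kinematic_model(forces, pts):
    """Full pipeline: force stage first, trajectory stage only for constrained objects."""
    if classify_force(forces) == "free-space":
        return "free-space", 1.0
    model, confidence = classify_trajectory(pts)
    if confidence < LEARNING_THRESHOLD:
        return None, confidence  # too uncertain: refuse to learn a model
    return model, confidence
```

The learning threshold makes the refusal behavior explicit: a low-confidence fit returns no model at all rather than a possibly wrong one, which mirrors how the framework avoids learning incorrect or uncertain models.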

The designed frameworks are experimentally validated by performing a total of 27 tele-operated demonstrations on the care robot Marco. From these demonstrations, the trajectory and force data were used as inputs to validate each framework separately. Additionally, the KMLtraj framework is evaluated on the Cody dataset, which contains the trajectories of 35 different manipulation tasks.

It is concluded that the KML framework can robustly recognize and learn the kinematic models of free-space and articulated objects. Moreover, a robustness analysis showed that the KML framework is more robust than the current state-of-the-art articulation package. Additionally, the KML framework is able to assess the quality of the learned models and can prevent the robot from learning incorrect or uncertain ones. Finally, the framework can easily be adjusted to different robot applications, as the effects of the tuning parameters are easy to understand and can be determined by assessing the robot application or by performing simple experiments.