Generating Robotic Manipulation Trajectories with Neural Networks

Abstract

To interact with environments and appliances made for humans, robots should be able to manipulate a large variety of objects. Having gained experience manipulating a certain object or appliance, a robot should be able to generalize this behaviour to novel but similar ones. When a human performs a simple task like pushing a previously unseen button or rotating a knob, they have an idea of how to do it: based on prior experience with similar object parts, a concept for this kind of manipulation task is formed. In this work we use a similar idea to teach robots to infer how an object or appliance should be manipulated. We take a neural network approach to generating manipulation trajectories for a robot. An instruction in natural language and a point cloud of the 'manipulatable' object part are encoded into a compact feature representation, and a recurrent neural network generates a manipulation trajectory conditioned on this learned feature representation. We report experimental results with our model: first we let the recurrent neural network hallucinate manipulation trajectories, which shows that it has learned reasonable motion patterns; then we compare the trajectories generated from the learned feature representation with the current state of the art. We show that for some simple tasks our model generates better trajectories, but that in general it lacks sufficient training data to generate reasonable trajectories for more challenging and complex tasks.
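To make the described pipeline concrete, the sketch below shows one plausible shape for it: a language-instruction embedding and a point-cloud feature are fused into a single compact representation, and a recurrent decoder emits a trajectory of waypoints conditioned on it. This is a minimal illustrative sketch, not the thesis's actual architecture; all layer sizes, the fusion scheme, the 7-dimensional waypoint format, and the class and parameter names are assumptions introduced here.

```python
import torch
import torch.nn as nn

class TrajectoryGenerator(nn.Module):
    """Illustrative sketch: fuse an instruction embedding and a point-cloud
    feature into one compact representation, then decode a manipulation
    trajectory (a sequence of waypoints) with an LSTM. All dimensions and
    names are hypothetical, not taken from the thesis."""

    def __init__(self, instr_dim=128, cloud_dim=256, feat_dim=128,
                 hidden_dim=256, waypoint_dim=7):
        super().__init__()
        # Encode the two modalities into the compact feature representation.
        self.encoder = nn.Sequential(
            nn.Linear(instr_dim + cloud_dim, feat_dim),
            nn.ReLU(),
        )
        # Recurrent decoder: each step consumes the previous waypoint
        # concatenated with the conditioning feature.
        self.decoder = nn.LSTM(waypoint_dim + feat_dim, hidden_dim,
                               batch_first=True)
        self.out = nn.Linear(hidden_dim, waypoint_dim)

    def forward(self, instr_feat, cloud_feat, n_steps=20):
        # Compact feature representation conditioning the whole trajectory.
        feat = self.encoder(torch.cat([instr_feat, cloud_feat], dim=-1))
        batch = feat.size(0)
        waypoint = torch.zeros(batch, 1, self.out.out_features)  # start token
        state = None
        trajectory = []
        for _ in range(n_steps):
            step_in = torch.cat([waypoint, feat.unsqueeze(1)], dim=-1)
            h, state = self.decoder(step_in, state)
            waypoint = self.out(h)  # next waypoint, fed back at the next step
            trajectory.append(waypoint)
        return torch.cat(trajectory, dim=1)  # (batch, n_steps, waypoint_dim)

# Usage with random stand-in features for instruction and point cloud:
model = TrajectoryGenerator()
traj = model(torch.randn(2, 128), torch.randn(2, 256))
print(traj.shape)  # torch.Size([2, 20, 7])
```

Feeding the generated waypoint back as the next input, rather than decoding the whole trajectory in one shot, is what allows such a model to "hallucinate" trajectories unconditionally or roll them out step by step at inference time.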