Suction Grasp Pose Planning Using Self-supervision and Transfer Learning

Abstract

Planning grasp poses for a robot on unknown objects in cluttered environments remains an open problem. Recent research suggests that deep learning is a promising approach to this task. In this field, three types of data are used for training: (a) human-labeled data; (b) synthetic data; (c) real robot data. Each has different properties in terms of collection cost and label accuracy. Recent approaches train a model on only a single type of data. The drawback of such a methodology is that human-labeled data is inaccurate and costly, synthetic data is scalable but inaccurate, and real robot data is accurate but costly. In this paper, we combine synthetic data and real robot data to train a Grasp Quality Convolutional Neural Network (GQ-CNN). We collect a real robot dataset of 10,000 datapoints without human annotation by running a UR5 equipped with a pneumatic suction gripper under an algorithmic supervisor, and we use this dataset to fine-tune a GQ-CNN model. We evaluate models both by classifying the collected data and by running physical grasping experiments on the robot. We test on 50 unknown objects with prismatic and complex shapes. Our method achieves a 100% grasp success rate on these objects, and the results suggest that the fine-tuned model learns to exploit the diameter and strong suction force of the suction cup.
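The pretrain-then-fine-tune paradigm described above can be sketched in miniature. The sketch below is not the paper's GQ-CNN: it substitutes a one-layer logistic "grasp quality" classifier for the CNN, and all data, feature dimensions, and learning rates are hypothetical stand-ins. It only illustrates the idea that a model pretrained on abundant synthetic labels can be adapted with a smaller set of real-robot labels whose decision boundary differs slightly.

```python
import math
import random

def sigmoid(z):
    # Logistic squashing of the grasp-quality logit into [0, 1].
    return 1.0 / (1.0 + math.exp(-z))

def sgd_epochs(w, data, lr, epochs):
    """Train a one-layer logistic model in place; w[-1] is the bias term."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])
            g = p - y  # gradient of cross-entropy loss w.r.t. the logit
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            w[-1] -= lr * g
    return w

def make_data(n, rule):
    # Hypothetical 2-D "grasp features" with a binary success label.
    return [
        ((random.uniform(-1, 1), random.uniform(-1, 1)), 0)
        for _ in range(n)
    ] and [
        (x, rule(x))
        for x in [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    ]

random.seed(0)
# Large synthetic set: label depends on one feature (an idealized simulator).
synthetic = make_data(2000, lambda x: 1 if x[0] > 0 else 0)
# Small real-robot set: the true boundary is shifted (e.g. by cup diameter).
real = make_data(200, lambda x: 1 if x[0] + 0.3 * x[1] > 0 else 0)

w = [0.0, 0.0, 0.0]
w = sgd_epochs(w, synthetic, lr=0.1, epochs=5)  # pretrain on synthetic data
w = sgd_epochs(w, real, lr=0.05, epochs=5)      # fine-tune on real data

def accuracy(w, data):
    correct = sum(
        1 for x, y in data
        if (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1]) > 0.5) == (y == 1)
    )
    return correct / len(data)

print("accuracy on real data:", accuracy(w, real))
```

Fine-tuning starts from the synthetic-pretrained weights rather than from scratch, which is why far fewer real-robot datapoints (here 200 vs. 2000) suffice to fit the shifted boundary.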