Robotic Grasping of Deformable Food Objects

A Human-Inspired Reinforcement Learning Approach

Abstract

There are many stages in the processing chain from farm to store that involve humans handling food objects. For some of these tasks it is desirable to find a robotic solution that either assists the human or takes over the task entirely, e.g. because it is physically demanding, poses contamination risks, or for economic reasons. Moreover, the COVID-19 pandemic recently exposed further vulnerabilities in food processing chains: seasonal labourers were blocked at borders and some food processing sites turned out to be "corona hotspots", with the result that entire chains were disrupted. A step towards solving some of these problems is studying robotic grasping, since it is a crucial skill for many manipulation tasks. This work focuses on force-closure grasps for pick-and-place tasks. Since humans grasp novel objects effortlessly, a human-inspired approach is proposed that combines visual and tactile sensory information and learns from its mistakes through reinforcement learning. Visual features obtained from an RGB-D camera, combined with pressure readings from sensors on the fingertips of the robotic hand, are used to set the desired grasp force adaptively while holding the object. A novel algorithm, LIFT (Learning of Initial Force and Tuning), is proposed for learning to grasp with minimal force. The grasp approach is evaluated in a Gazebo simulation environment and on a real-world robotic setup, and achieves successful grasping in both settings within tens to hundreds of learning interactions, depending on the size of the state-action space. Success rates of 96.3 % and 96.7 % are obtained in simulation and on the real-world setup, respectively. These results indicate that the approach is successful in grasping with minimal force.
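
The abstract gives no implementation details, but the core idea it describes (selecting an initial grasp force from visual features, then tuning that force online from tactile feedback) can be sketched as a simple bandit-style reinforcement-learning loop. The Python sketch below is illustrative only: the class name LiftGrasper, the reward shaping, the slip-detection rule, and all parameter values are assumptions for exposition, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical sketch of a LIFT-style learner. All names, rewards, and the
# slip-tuning rule are illustrative assumptions, not the paper's code.

class LiftGrasper:
    def __init__(self, force_levels, epsilon=0.1, alpha=0.5):
        self.force_levels = force_levels   # discrete initial-force actions (N)
        self.q = defaultdict(float)        # Q-table keyed by (state, force)
        self.epsilon = epsilon             # exploration rate
        self.alpha = alpha                 # learning rate

    def choose_initial_force(self, state):
        """Epsilon-greedy choice of initial grasp force for a visual state,
        where `state` is a discretised descriptor from the RGB-D features."""
        if random.random() < self.epsilon:
            return random.choice(self.force_levels)
        return max(self.force_levels, key=lambda f: self.q[(state, f)])

    def tune(self, force, slip_detected, step=0.2):
        """Online tuning while holding: raise the force setpoint slightly
        whenever the fingertip pressure sensors report slip."""
        return force + step if slip_detected else force

    def update(self, state, force, success, applied_force, beta=0.05):
        """Bandit-style update: reward successful grasps and penalise excess
        applied force, encouraging grasping with minimal force."""
        reward = (1.0 if success else -1.0) - beta * applied_force
        key = (state, force)
        self.q[key] += self.alpha * (reward - self.q[key])


# Example episode (all values hypothetical):
agent = LiftGrasper(force_levels=[1.0, 2.0, 3.0, 4.0])
state = "small_soft"                          # discretised visual features
force = agent.choose_initial_force(state)
# ... execute grasp; during the hold phase, call agent.tune() on slip ...
agent.update(state, force, success=True, applied_force=force)
```

Under this reading, the size of the state-action space (the number of discretised visual states times the number of force levels) would directly govern how many interactions the table needs before converging, consistent with the abstract's "tens or hundreds of learning interactions".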