Vision Assisted Motion Planning of Robotic Arm For Service Robots

Abstract

In this thesis, a vision-assisted system is developed for the manipulation of a robotic arm intended for use with service robots in unconstrained environments. The vision module comprises segmentation and object tracking, allowing the user to select the object to be grasped. It is shown that GrabCut segmentation improves the efficiency of the Tracking-Learning-Detection (TLD) tracker. The MoveIt! platform is used to solve motion planning problems. In addition to the default Open Motion Planning Library (OMPL) in MoveIt!, Stochastic Trajectory Optimization for Motion Planning (STOMP) and the Search-Based Planning Library (SBPL) are also explored. An inverse-kinematics-based genetic search is used to generate waypoints for challenging manipulation tasks and has been incorporated into the MoveIt! framework; the waypoints generated through this search are shown to be valid. Additionally, an analysis is carried out to evaluate how well the different motion planning libraries find implementable solutions for varying positions of the goal relative to the arm and varying clearances to nearby obstacles. It is shown that certain motion planning libraries perform better for particular clearances and goal positions. A contextual awareness module is developed that selects the most suitable planning algorithm given the clearance from obstacles and the position of the target pose relative to the arm. A flexible framework is created that incorporates the vision module, the genetic search, and the contextual awareness module, and allows switching between the three motion planning libraries. The system is also tested on the robotic arm at Robot Care Systems.
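
To illustrate the vision-module idea of refining a user-selected region with GrabCut before handing it to a TLD tracker, the following is a minimal sketch using OpenCV. It is not the thesis implementation: the file name, rectangle values, helper name, and the specific tracker constructor are assumptions, and the contrib TLD tracker API varies between OpenCV versions.

```python
import cv2
import numpy as np


def refine_roi_with_grabcut(frame, rect, iterations=5):
    """Tighten a user-drawn bounding box with GrabCut so the tracker
    is initialised on the segmented object rather than the rough box.
    (Illustrative sketch only, not the thesis code.)"""
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    # rect = (x, y, w, h) from the user's mouse selection
    cv2.grabCut(frame, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Keep pixels labelled as definite or probable foreground
    fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                       255, 0).astype(np.uint8)
    # Bounding box of the segmented foreground becomes the refined ROI
    return cv2.boundingRect(fg_mask)


# Hypothetical usage: initialise a TLD tracker on the refined box
# (requires opencv-contrib; older builds expose cv2.TrackerTLD_create()).
frame = cv2.imread("scene.png")
user_rect = (120, 80, 200, 160)          # x, y, width, height placeholders
tight_rect = refine_roi_with_grabcut(frame, user_rect)
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, tight_rect)
```

The intent of the sketch is that a tighter initial bounding box contains less background, which is consistent with the abstract's claim that GrabCut segmentation improves the efficiency of the TLD tracker.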