A General Purpose Control Design For Vision Based Autonomous Quadrotor Navigation


Abstract

Quadrotors are unmanned aerial vehicles (UAVs) controlled by changing the angular speeds of their four rotors. In recent years they have received considerable attention from the research community thanks to their vertical take-off and landing capability, small dimensions, payload capacity, flight endurance and low price, which has increased the interest in making them fully autonomous. In this thesis two problems in the field of autonomous quadrotor navigation are studied. The first is how to drive the quadrotor from its current pose (position and orientation) to a desired one, assuming that measurements of the quadrotor's full state (position, orientation, velocity and acceleration) are available. The second is how to use visual information, acquired by either the front or the bottom camera, to place the quadrotor at a desired distance from a chosen static or moving object while keeping that object centered in the camera image, without knowing either the quadrotor's pose or the object's. To solve the first problem, a navigation controller framework able to communicate with both the Parrot AR.Drone 2.0 and the Pixhawk autopilot has been designed. A cascade control design consisting of seven 2DOF PID controllers, organized into five modules (horizontal position controller, vertical position controller, horizontal speed controller, vertical speed controller, yaw controller), ensures that the navigation controller framework can simultaneously track the desired yaw angle together with either the desired velocities or the desired positions.
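To illustrate the building block of the cascade design, the following is a minimal sketch of a single 2DOF PID update step in C++ (the thesis' implementation language). The class name, gains, setpoint weights `b` and `c`, and output limits are illustrative assumptions, not the tuned values or interfaces used in the thesis:

```cpp
#include <algorithm>
#include <cassert>

// Minimal 2DOF PID sketch (hypothetical names and gains). The setpoint
// weights b and c act only on the proportional and derivative terms,
// decoupling setpoint tracking from disturbance rejection.
class Pid2Dof {
public:
    Pid2Dof(double kp, double ki, double kd, double b, double c,
            double u_min, double u_max)
        : kp_(kp), ki_(ki), kd_(kd), b_(b), c_(c),
          u_min_(u_min), u_max_(u_max) {}

    // r: setpoint, y: measurement, dt: sample time [s]; returns command u.
    double update(double r, double y, double dt) {
        const double e  = r - y;        // integral acts on the full error
        const double ep = b_ * r - y;   // weighted proportional error
        const double ed = c_ * r - y;   // weighted derivative error
        integral_ += e * dt;
        const double deriv = (ed - prev_ed_) / dt;
        prev_ed_ = ed;
        double u = kp_ * ep + ki_ * integral_ + kd_ * deriv;
        // simple anti-windup via output clamping
        return std::clamp(u, u_min_, u_max_);
    }

private:
    double kp_, ki_, kd_, b_, c_, u_min_, u_max_;
    double integral_ = 0.0, prev_ed_ = 0.0;
};
```

In a cascade arrangement, the output of an outer position controller of this form becomes the setpoint `r` of the corresponding inner speed controller.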
To solve the second problem, a vision-based planner made up of three modules (perception, image state estimator, image-based visual servo) has been developed. Planar ArUco markers are used to avoid the need for object-specific detectors; their corners are extracted by the ArUco detector located inside the perception module. Inside the image state estimator module, a Kalman filter with a constant-velocity model has been designed both to estimate the corners when a detection is temporarily unavailable and to provide feedback data (the estimated corners) to the image-based visual servo module at a chosen fixed frequency. An image-based visual servo algorithm has been implemented to compute the desired translational velocities that the navigation controller framework has to track in order to minimize the error between the estimated and the desired corners, where the latter are computed as a function of the desired distance between the quadrotor and the visual marker.
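The constant-velocity Kalman filter idea can be sketched per corner coordinate as below, in C++. The structure, state layout and noise parameters here are illustrative assumptions, not the thesis' actual implementation; each image corner would use two such filters (one per pixel axis), and `predict` alone bridges frames where the detector returns nothing:

```cpp
#include <cassert>

// Per-axis constant-velocity Kalman filter sketch for one marker-corner
// coordinate (hypothetical interface; q and r_meas are not tuned values).
// State: position p and velocity v; P is the symmetric 2x2 covariance.
struct CvKalman1D {
    double p = 0.0, v = 0.0;
    double P00 = 1.0, P01 = 0.0, P11 = 1.0;
    double q;       // process noise intensity (acceleration variance)
    double r_meas;  // measurement noise variance [px^2]

    CvKalman1D(double process_noise, double meas_noise)
        : q(process_noise), r_meas(meas_noise) {}

    // Propagate the state by dt; used on its own when detection drops out.
    void predict(double dt) {
        p += v * dt;
        // P = F P F' + Q with F = [[1, dt], [0, 1]]
        const double n00 = P00 + dt * (2.0 * P01 + dt * P11)
                         + q * dt * dt * dt * dt / 4.0;
        const double n01 = P01 + dt * P11 + q * dt * dt * dt / 2.0;
        const double n11 = P11 + q * dt * dt;
        P00 = n00; P01 = n01; P11 = n11;
    }

    // Correct with a measured pixel coordinate z (H = [1, 0]).
    void update(double z) {
        const double s  = P00 + r_meas;       // innovation covariance
        const double k0 = P00 / s, k1 = P01 / s;  // Kalman gain
        const double innov = z - p;
        p += k0 * innov;
        v += k1 * innov;
        const double n00 = (1.0 - k0) * P00;
        const double n01 = (1.0 - k0) * P01;
        const double n11 = P11 - k1 * P01;
        P00 = n00; P01 = n01; P11 = n11;
    }
};
```

The estimated corner positions produced this way are what the image-based visual servo module consumes at a fixed rate, regardless of the detector's irregular output.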
To validate the navigation controller framework and the vision-based planner, a mission has been chosen in which the quadrotor first autonomously reaches a predefined desired pose, then approaches a static visual marker up to a desired distance using the front camera's image data, then reaches a second predefined desired pose, and finally lands autonomously on either a static or a moving visual marker using the bottom camera's image information. The mission has been tested in real flight with the Parrot AR.Drone 2.0 and in simulation using the Gazebo simulator in combination with PX4 Software-In-The-Loop.
Both the navigation controller framework and the vision-based planner have been developed using ROS as the software framework to provide communication between the system's components, and all the algorithms have been implemented in C++.