Title: High-Level Command Mapping for Multi-Robot Aerial Cinematography
Author: Durrant, Robert (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Cognitive Robotics)
Contributor: Alonso Mora, Javier (mentor)
Degree granting institution: Delft University of Technology
Programme: Mechanical Engineering | Systems and Control
Date: 2018-12-05

Abstract: Aerial cinematography has seen increased use of Unmanned Aerial Vehicles (UAVs) in recent years due to technological advancements and commercialisation. Operating such a robot is complex and requires a dedicated operator. Automating the cinematography enables the use of multiple robots, which further increases the complexity of the task. High-level command interpretation is therefore required to provide an intuitive interface suited to an inexperienced user. Natural Language (NL) is an intuitive interface method that allows a user to specify an extensive range of commands. A Cinematographic Description Clause (CDC) is defined to extract information from a processed NL command. A minimum-input approach is considered in which a user merely specifies the number of robots and the people to record; specifying a behaviour is optional. An environment is considered in which up to three robots must frame two people. Taking into account the people's orientation, their relative global location and the user command, a set of behaviours is determined based on cinematographic practices. Camera views and image parameters are determined through behaviour-specific non-linear optimisations and assigned to the robots using a Linear-Bottleneck Algorithm (LBA). A collision-free global path is computed for each robot with an A* search algorithm.
Finally, a Model Predictive Controller (MPC) determines low-level inputs such that the user command can be achieved. Three situations are considered to validate the performance of the system given the minimal user input. First, tracking of the dynamic orientations of the people is evaluated for up to three robots, whereby camera positions are determined autonomously. Next, dynamic motions of the two people through an environment highlight the limitations of the system due to collision mitigation, mutual visibility and robot dynamics. Finally, an extension to multiple simultaneous commands increases the number of robots and people that can be tracked, allowing an assessment of the flexibility and scalability of the proposed high-level command interpretation methodology.

Subject: Aerial Cinematography; MPC; Command Mapping
To reference this document use: http://resolver.tudelft.nl/uuid:a8ca315c-93b4-4b6c-83fd-f587f5aed34c
Part of collection: Student theses
Document type: master thesis
Rights: © 2018 Robert Durrant
Files: Thesis_Durrant4204530.pdf (6.07 MB)
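The view-assignment step mentioned in the abstract — camera views assigned to robots via a Linear-Bottleneck Algorithm — can be sketched as follows. This is a minimal illustration, not the thesis implementation: the cost matrix, its values, and the brute-force search over permutations (adequate for the at most three robots considered) are assumptions made for the example.

```python
from itertools import permutations

def bottleneck_assign(cost):
    """Assign each robot (row) a camera view (column) so that the single
    worst robot-to-view cost is minimised (the bottleneck criterion).
    Brute force over permutations suffices for a handful of robots."""
    n = len(cost)
    best_perm, best_bottleneck = None, float("inf")
    for perm in permutations(range(n)):
        worst = max(cost[r][perm[r]] for r in range(n))
        if worst < best_bottleneck:
            best_bottleneck, best_perm = worst, perm
    return list(best_perm), best_bottleneck

# Hypothetical costs: cost[r][v] = effort for robot r to take view v
cost = [[4.0, 2.0, 8.0],
        [3.0, 7.0, 5.0],
        [6.0, 1.0, 2.0]]
assignment, worst = bottleneck_assign(cost)
# assignment[r] is the view index given to robot r; worst is the bottleneck cost
```

Unlike a linear-sum assignment, which minimises the total cost, the bottleneck criterion minimises the cost of the worst-off robot, which is a natural fit when no single camera may lag far behind its target view.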
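The collision-free global path computation can likewise be illustrated with A* search on a small occupancy grid. The grid, 4-connectivity, unit step costs and Manhattan heuristic are assumptions for this sketch; the planner in the thesis may use a different discretisation.

```python
import heapq

def astar(grid, start, goal):
    """Shortest collision-free path on a 4-connected occupancy grid
    (1 = obstacle, 0 = free) using A* with a Manhattan heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    seen = set()
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cur[0] + dr, cur[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(open_set,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no collision-free path exists

# Toy environment: a wall forces a detour around row 1
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Because the Manhattan heuristic is admissible on this grid, the returned path is optimal in step count; here it must detour through the gap at the right-hand column.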