Automatic Classification of Unmanned Aerial Vehicles with Radars On-The-Move


Abstract

Drone detection and tracking systems are now a requirement at most public, private, and political events because of the growing risk of unintentional or malicious misuse of these platforms. To ensure adequate protection, full spatial coverage is essential for any such system. However, the research literature focuses on staring radars, which have a limited field of view but yield rich target information through time-frequency distributions that facilitate target recognition. This thesis instead considers surveillance radars that offer full spatial coverage, although using them for classification is more challenging because their rotating antennas limit the dwell time on targets.
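As a hedged illustration of the time-frequency analysis mentioned above (not taken from the thesis), the sketch below computes the spectrogram of a simulated rotating-blade return, where blade micro-Doppler appears as periodic sidebands around the body line; all parameter values are assumptions for demonstration only.

```python
# Illustrative sketch: micro-Doppler signature of a rotating-blade target
# via a short-time Fourier transform. All parameter values are hypothetical.
import numpy as np
from scipy import signal

fs = 8_000.0                      # pulse repetition frequency [Hz] (assumed)
t = np.arange(0, 0.5, 1.0 / fs)   # 0.5 s dwell, plausible for a staring radar

f_body = 200.0    # bulk Doppler shift of the drone body [Hz]
f_rot = 80.0      # blade rotation rate [Hz]
f_dev = 300.0     # peak micro-Doppler deviation from the blades [Hz]

# Complex return: body Doppler plus sinusoidal phase modulation from the blades
x = np.exp(1j * (2 * np.pi * f_body * t
                 + (f_dev / f_rot) * np.sin(2 * np.pi * f_rot * t)))
x += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))  # noise

# Time-frequency distribution: the blade flashes show up as modulation
# around the body line, which is what classifiers exploit.
f, tau, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192,
                                 return_onesided=False)
print(Sxx.shape)  # (frequency bins, time frames)
```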

Additionally, driven by the rapid growth of the drone market, novel counter-drone radars that can jointly localize and classify small targets while on-the-move have become a highly in-demand remote sensing system. Nonetheless, surveillance sensors mounted on moving vehicles are a new technology still under development. This work therefore investigates surveillance systems in a novel scenario and presents the technological challenges, alongside the proposed solutions, for achieving reliable object detection with ground-based counter-drone radars on-the-move. Specifically, the pre-processing steps required to remove clutter from the data while the radar is rotating and moving over the ground are developed and discussed.
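As a minimal sketch of what such pre-processing can look like (my illustration, not the thesis' actual chain), the snippet below first counter-rotates the Doppler phase that stationary clutter acquires from the platform's own motion, then applies a classical two-pulse MTI canceller; the function names and parameter values are assumptions.

```python
# Hedged sketch: clutter suppression for a radar on a moving platform.
# Step 1: remove the platform-induced Doppler so stationary clutter returns
# to zero Doppler. Step 2: cancel zero-Doppler returns with a two-pulse MTI.
import numpy as np

def compensate_ego_motion(pulses, v_platform, azimuth, wavelength, pri):
    """Counter-rotate the platform-induced Doppler phase.

    pulses:      complex array, shape (n_pulses, n_range_bins)
    v_platform:  platform speed along its heading [m/s]
    azimuth:     antenna look angle relative to the heading [rad]
    wavelength:  radar wavelength [m]
    pri:         pulse repetition interval [s]
    """
    # Radial velocity of stationary scatterers as seen by the moving radar
    v_radial = v_platform * np.cos(azimuth)
    f_clutter = 2.0 * v_radial / wavelength          # clutter Doppler [Hz]
    n = np.arange(pulses.shape[0])[:, None]
    # After this, stationary clutter sits near zero Doppler again
    return pulses * np.exp(-2j * np.pi * f_clutter * n * pri)

def mti_two_pulse(pulses):
    """First-order MTI canceller: subtracting consecutive pulses nulls
    returns at zero Doppler (i.e., the compensated stationary clutter)."""
    return pulses[1:] - pulses[:-1]

# Usage on synthetic data (all values hypothetical)
rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 512)) + 1j * rng.standard_normal((64, 512))
clean = mti_two_pulse(compensate_ego_motion(raw, v_platform=5.0,
                                            azimuth=np.deg2rad(30.0),
                                            wavelength=0.03, pri=1e-4))
print(clean.shape)  # (63, 512)
```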

Finally, the joint detection and classification problem is traditionally solved by separate algorithms because of the computational complexity of the task. This thesis presents a novel framework that localizes and labels drones in a unified pipeline, framed as an object detection problem from computer vision, and that can operate while static or on-the-move. To this end, an end-to-end radar data processing architecture based on the You Only Look Once (YOLO) model, robust against homogeneity constraints, performs object detection in real time. In brief, this work opens new avenues towards multi-class and multi-instance plot-based target detection and classification by transferring cross-disciplinary algorithms from computer vision to remote sensing.
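To make the idea concrete, the sketch below runs an off-the-shelf YOLO detector on a rasterized radar plot (e.g., a PPI or range-Doppler map saved as an image). This assumes the popular ultralytics implementation; the weights file, image path, and class names are placeholders, and the thesis' own trained model is not reproduced here.

```python
# Illustrative sketch only: YOLO inference on a rasterized radar frame.
# "radar_yolo_weights.pt" and "radar_frame.png" are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("radar_yolo_weights.pt")   # hypothetical fine-tuned weights
results = model("radar_frame.png")      # one rasterized radar frame

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]   # e.g. "drone" vs "bird"
        conf = float(box.conf)
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box in pixels
        print(f"{cls_name} ({conf:.2f}) at "
              f"[{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```

Because detection and classification come out of a single forward pass, each bounding box simultaneously localizes a target on the plot and assigns it a class label, which is what allows the pipeline to run in real time on both static and moving-platform data.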