Deep Reinforcement Learning for Rapid Communication Network Recovery with Multiple Repair Crews

Master Thesis (2021)
Author(s)

I. Vilhjálmsson (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

J. Oostenbrink – Mentor (TU Delft - Embedded Systems)

Fernando A. Kuipers – Graduation committee member (TU Delft - Embedded Systems)

J.A. Pouwelse – Graduation committee member (TU Delft - Data-Intensive Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2021 Ingimundur Vilhjálmsson
Publication Year
2021
Language
English
Graduation Date
27-09-2021
Awarding Institution
Delft University of Technology
Programme
Electrical Engineering | Embedded Systems
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Natural disasters can destroy communication network components, potentially leading to severe losses in connectivity. During such devastating events, network connectivity is crucial for rescue teams as well as anyone in need of assistance, so swift network restoration following a disaster is vital. However, past post-disaster network recovery efforts have proven too slow.

Rapidly deployable networks (RDNs) are communication networks that can be configured as wireless mesh networks and integrated into an existing communication network. As the name suggests, RDNs have a quick setup time and are highly transportable. The technologies behind RDNs have advanced considerably in recent years. Nonetheless, how best to deploy such a network remains an open problem.

Existing solutions for rapid post-disaster network recovery are inflexible. First, each is designed around one specific problem formulation: even slight modifications to the problem greatly increase the algorithm's complexity and can require major design changes to the system. Second, the proposed solutions cannot adapt to unexpected circumstances, such as repairs taking longer than anticipated. We propose an online network recovery approach to address these flexibility issues.

With the optimization objective of maximizing a network's weighted connectivity while minimizing the overall duration of the recovery process, we design a Deep Reinforcement Learning (DRL) system that produces optimal RDN deployment decisions. Experiments show that our Deep Q-network (DQN) algorithm outperforms greedy and naive approaches in every disaster scenario tested.
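To make the objective concrete, the following is a minimal toy sketch of the kind of decision problem the abstract describes: an agent repeatedly chooses which failed node to repair next, and each repair is rewarded by the gain in weighted connectivity minus a penalty proportional to repair time. Everything here is hypothetical: the 4-node network, the weights, repair times, and hyperparameters are invented for illustration, and a small tabular Q-learning loop stands in for the thesis's actual DQN (a neural-network function approximator) purely to keep the sketch self-contained.

```python
import random

# Hypothetical toy network: intact hub node 0; nodes 1-3 have failed.
# A repaired node contributes its weight only if it is reachable from
# the hub through other repaired nodes.
WEIGHTS = {1: 5.0, 2: 1.0, 3: 3.0}       # importance of each failed node
REPAIR_TIME = {1: 2.0, 2: 1.0, 3: 1.0}   # how long each repair takes
EDGES = {(0, 1), (1, 2), (1, 3)}         # hub 0 -- 1 -- {2, 3}
GAMMA = 0.9                              # discount: earlier connectivity is better

def connectivity(repaired):
    """Weighted connectivity: total weight of repaired nodes reachable from hub 0."""
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for a, b in EDGES:
            for v in ((b,) if a == u else (a,) if b == u else ()):
                if v not in seen and (v == 0 or v in repaired):
                    seen.add(v)
                    stack.append(v)
    return sum(WEIGHTS[n] for n in repaired if n in seen)

def step(state, action):
    """Repair one node; reward = connectivity gained minus a time penalty,
    echoing the thesis objective of connectivity vs. recovery duration."""
    nxt = frozenset(state | {action})
    reward = connectivity(nxt) - connectivity(state) - 0.5 * REPAIR_TIME[action]
    return nxt, reward

Q = {}  # tabular Q-values, keyed by (state, action)
def q(s, a):
    return Q.get((s, a), 0.0)

random.seed(0)
for _ in range(2000):  # epsilon-greedy Q-learning episodes
    state = frozenset()
    while len(state) < len(WEIGHTS):
        acts = [n for n in WEIGHTS if n not in state]
        a = random.choice(acts) if random.random() < 0.2 \
            else max(acts, key=lambda x: q(state, x))
        nxt, r = step(state, a)
        future = max((q(nxt, b) for b in WEIGHTS if b not in nxt), default=0.0)
        Q[(state, a)] = q(state, a) + 0.1 * (r + GAMMA * future - q(state, a))
        state = nxt

# Greedy policy from the learned table: which node to repair first?
first = max(WEIGHTS, key=lambda a: q(frozenset(), a))
```

In this toy, the learned policy repairs node 1 first even though it is the slowest repair, because it is the cut vertex that makes nodes 2 and 3 reachable; a purely greedy immediate-reward heuristic can miss such dependencies, which is the kind of gap the abstract's DQN is meant to close.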

Files

MSc_Thesis_Ingimundur.pdf
(PDF, 3.24 MB)
License info not available