Deep Reinforcement Learning for Rapid Communication Network Recovery with Multiple Repair Crews

Abstract

Natural disasters can destroy communication network components, potentially leading to severe losses in connectivity. During such events, network connectivity is crucial for rescue teams as well as anyone in need of assistance. Therefore, swift network restoration following a disaster is vital. However, post-disaster network recovery efforts have repeatedly proven too slow in the past.

Rapidly deployable networks (RDNs) are communication networks that can be configured as a wireless mesh network and integrated into an existing communication network. As the name suggests, RDNs are highly transportable and quick to set up. The technologies behind RDNs have advanced considerably in recent years. Nonetheless, how best to deploy such a network remains an open problem.

Existing solutions for rapid post-disaster network recovery are inflexible. First, each is designed around a specific problem: even slight modifications to that problem greatly increase the algorithm's complexity and can require major design changes to the system. Second, the proposed solutions are unable to adapt to unexpected circumstances, such as repairs taking longer than anticipated. We propose an online network recovery approach to address these flexibility issues.

With the optimization objective of maximizing a network's weighted connectivity while minimizing the overall duration of the recovery process, we design a Deep Reinforcement Learning (DRL) system to produce optimal RDN deployment decisions. Experiments show that our Deep Q-Network (DQN) algorithm outperforms greedy and naive approaches across all evaluated disaster scenarios.
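To make the optimization objective concrete, the following is a minimal sketch of one plausible per-step reward for such a DRL agent. It assumes (this formalization is not from the abstract) that weighted connectivity is measured as the total weight of ordered node pairs that can communicate, and that the duration objective enters as a time penalty with a hypothetical coefficient `time_penalty`:

```python
def connected_components(adj):
    """Return the connected components of an undirected graph.

    `adj` maps each node to a list of its neighbors.
    """
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        comps.append(comp)
    return comps


def weighted_connectivity(adj, weight):
    """Total weight of ordered node pairs that can communicate.

    Within a component of total weight W, each node v contributes
    weight[v] * (W - weight[v]).
    """
    total = 0
    for comp in connected_components(adj):
        w_comp = sum(weight[v] for v in comp)
        total += sum(weight[v] * (w_comp - weight[v]) for v in comp)
    return total


def step_reward(adj_before, adj_after, weight, repair_time, time_penalty=0.1):
    """Reward for one repair action: connectivity gain minus a time cost."""
    gain = (weighted_connectivity(adj_after, weight)
            - weighted_connectivity(adj_before, weight))
    return gain - time_penalty * repair_time
```

For example, restoring a link that joins two unit-weight nodes into one component yields a connectivity gain of 2 (both ordered pairs can now communicate), from which the time cost of the repair action is subtracted. The actual reward shaping used in the thesis may differ; this sketch only illustrates how the two objectives can be combined into a single scalar signal for a DQN.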