GGANet

Algorithm Unrolling for Water Distribution Networks Metamodelling

Abstract

Water distribution networks (WDNs) provide drinking water to urban and rural consumers through a network of pipes that transport water from reservoirs to junctions. Water utilities rely on tools such as EPANET to simulate and analyse the performance of WDNs. EPANET solves the flow continuity and headloss equations of pressurised piped networks under steady-state conditions by applying the iterative Global Gradient Algorithm (GGA) until convergence, determining the unknown flows and pressures simultaneously. Despite the widespread success of the GGA, its speed may hinder applications that require many simulations, especially for large networks.
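To make the iteration concrete, the GGA can be viewed as a block-eliminated Newton scheme on the coupled continuity and headloss equations. The sketch below is illustrative only, not the EPANET implementation: the two-pipe network, resistances, and demands are made-up values, and the matrix names (`A11`, `A12`, `A21`, `A10`) follow the common incidence-matrix convention for this formulation.

```python
import numpy as np

# Hypothetical two-pipe network: reservoir (fixed head H0) -> node 1 -> node 2.
# A12 maps unknown-head nodes to pipes (+1 at a pipe's end node, -1 at its start);
# A10 does the same for the fixed-head reservoir node.
A12 = np.array([[1.0, 0.0],    # pipe 0: reservoir -> node 1
                [-1.0, 1.0]])  # pipe 1: node 1 -> node 2
A21 = A12.T
A10 = np.array([-1.0, 0.0])    # only pipe 0 connects to the reservoir
H0 = 100.0                     # reservoir head [m] (illustrative value)
r = np.array([0.01, 0.02])     # pipe resistance coefficients (illustrative)
q = np.array([0.03, 0.05])     # nodal demands [m^3/s] (illustrative)
n = 1.852                      # Hazen-Williams headloss exponent

def gga_solve(Q, H, iters=30):
    """Iterate GGA-style (block-eliminated Newton) updates for flows Q and heads H."""
    for _ in range(iters):
        absQ = np.maximum(np.abs(Q), 1e-12)
        A11 = r * absQ ** (n - 1)          # headloss coefficients r|Q|^(n-1)
        D = n * A11                        # Jacobian diagonal d(headloss)/dQ
        F1 = A11 * Q + A12 @ H + A10 * H0  # energy (headloss) residual per pipe
        F2 = A21 @ Q - q                   # continuity residual per node
        # Head update from the reduced (Schur-complement) system,
        # then the flow update that restores continuity.
        S = A21 @ np.diag(1.0 / D) @ A12
        dH = np.linalg.solve(S, F2 - A21 @ (F1 / D))
        dQ = -(F1 + A12 @ dH) / D
        Q, H = Q + dQ, H + dH
    return Q, H

Q, H = gga_solve(Q=np.array([0.1, 0.1]), H=np.array([100.0, 100.0]))
# At convergence, continuity fixes the flows: Q = [0.08, 0.05],
# and the heads drop along the flow path: H0 > H[0] > H[1].
```

Note how the continuity equations are linear, so the flows satisfy them exactly after a single update, while the heads are refined over the remaining iterations.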

To overcome these issues, researchers often resort to surrogate models, also known as metamodels, which significantly reduce simulation times while approximating the behaviour of hydrodynamic models. Machine learning methods are increasingly being used as surrogate models for WDN analysis. Among these, the multi-layer perceptron (MLP) is the preferred metamodelling choice. MLPs provide a fast alternative to hydrodynamic models, but may lack the desired level of accuracy for complex case studies and applications, largely because they incorporate no domain knowledge. Domain knowledge in WDNs can be divided into the topology of the network and the physical equations that govern the system. Recently, researchers have extended MLPs to include domain knowledge in the form of the system's topology; these machine learning-based surrogates are graph neural networks (GNNs). However, GNNs are slower than MLPs and still do not incorporate the physical equations of the system.

For this reason, we propose applying algorithm unrolling to the GGA to include information on the physical equations that govern WDNs. Algorithm unrolling (AU) is a promising technique within the field of model-based deep learning that provides an alternative to traditional data-driven methods for approximating the output of iterative algorithms. The approach deconstructs the iterative algorithm that solves the optimisation problem of interest into blocks that can be represented as layers of a neural network. Our architecture identifies two steps in the GGA and designs the layers of a deep neural network to follow them. In addition, the architecture incorporates the system features that remain constant across GGA iterations by adding residual connections to the corresponding variables. We evaluate the proposed metamodel by predicting the heads of five WDNs with varying characteristics, and present an ablation study to understand the effect of the different components of our architecture. Our results show that the metamodel improves accuracy with respect to MLPs and GNNs while being up to 2000 times faster than EPANET. We believe that the increase in accuracy over the state-of-the-art baselines and the increase in speed over EPANET justify the use of this metamodel for WDN applications that rely on many EPANET simulations.
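The thesis architecture itself is not reproduced here, but the unrolling idea can be sketched as follows: each GGA iteration becomes a network layer whose hand-derived coefficients are replaced by learnable parameters, and residual connections re-inject the constant system features (e.g. demands and reservoir heads) at every layer. All names, shapes, and the `tanh` nonlinearity below are illustrative assumptions, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def unrolled_gga_forward(x, features, weights):
    """Forward pass through K unrolled GGA-style layers (illustrative sketch).

    x        : initial guess of the state (e.g. nodal heads), shape (n_nodes,)
    features : constant system features (e.g. demands, reservoir heads)
    weights  : list of K (W, b, alpha) triples that play the role of the
               hand-derived GGA coefficients and are learned from data
    """
    for W, b, alpha in weights:
        update = np.tanh(W @ x + b)        # learned surrogate for one GGA step
        x = x + alpha * update + features  # residual connection re-injects features
    return x

n_nodes, K = 4, 3
features = 0.1 * rng.normal(size=n_nodes)  # made-up constant system features
weights = [(0.1 * rng.normal(size=(n_nodes, n_nodes)),
            np.zeros(n_nodes),
            0.5) for _ in range(K)]
heads = unrolled_gga_forward(np.zeros(n_nodes), features, weights)
```

Because the number of layers K is fixed, one forward pass costs a fixed number of matrix-vector products, which is how an unrolled metamodel can be orders of magnitude faster than iterating the GGA to convergence.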
