Attentional Graph Neural Network Is All You Need for Robust Massive Network Localization

Review (2025)
Author(s)

Wenzhong Yan (The Chinese University of Hong Kong, Shenzhen)

Feng Yin (The Chinese University of Hong Kong, Shenzhen)

Juntao Wang (The Chinese University of Hong Kong, Shenzhen)

Geert Leus (TU Delft - Signal Processing Systems)

Abdelhak M. Zoubir (Technische Universität Darmstadt)

Yang Tian (Huawei Technologies Co. Ltd.)

Research Group
Signal Processing Systems
DOI
https://doi.org/10.1109/JSTSP.2025.3590639
Publication Year
2025
Language
English
Issue number
7
Volume number
19
Pages (from-to)
1493-1513
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In this paper, we design Graph Neural Networks (GNNs) with attention mechanisms to tackle an important yet challenging nonlinear regression problem: massive network localization. We first review our previous network localization method based on the Graph Convolutional Network (GCN), which achieves state-of-the-art localization accuracy, even under severe Non-Line-of-Sight (NLOS) conditions, by carefully preselecting a constant threshold for determining adjacency. As an extension, we propose a specially designed Attentional GNN (AGNN) model to resolve the sensitive thresholding issue of the GCN-based method and enhance the underlying model capacity. The AGNN comprises an Adjacency Learning Module (ALM) and Multiple Graph Attention Layers (MGALs), employing distinct attention architectures to systematically address the shortcomings of the GCN-based method and render it more practical for real-world applications. Comprehensive analyses are conducted to explain the superior performance of these methods, including a theoretical analysis of the AGNN's dynamic attention property and computational complexity, along with a systematic discussion of their robustness to NLOS measurements. Extensive experimental results demonstrate the effectiveness of the GCN-based and AGNN-based network localization methods. Notably, integrating attention mechanisms into the AGNN yields substantial improvements in localization accuracy, approaching the fundamental lower bound and achieving approximately 37% to 53% reduction in localization error compared to the vanilla GCN-based method across various NLOS noise configurations. Both methods substantially outperform all competing approaches in terms of localization accuracy, robustness, and computational time, especially for considerably large network sizes.
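To make the two adjacency strategies in the abstract concrete, the sketch below contrasts (a) the hard, threshold-based 0/1 adjacency used by the GCN-based method with (b) a soft, attention-weighted adjacency in the spirit of the AGNN's adjacency learning. This is a minimal illustrative sketch, assuming numpy only; the scoring function, network architecture, and training procedure are simplified placeholders, not the paper's actual ALM or MGAL design.

```python
import numpy as np

def thresholded_adjacency(dist, threshold):
    """Hard adjacency used by the GCN-based method: two nodes are
    connected iff their measured pairwise distance falls below a
    preselected constant threshold (plus self-loops)."""
    A = (dist < threshold).astype(float)
    np.fill_diagonal(A, 1.0)
    return A

def gcn_layer(A, X, W):
    """One symmetric-normalized graph-convolution step:
    H = D^{-1/2} A D^{-1/2} X W (nonlinearity omitted for brevity)."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return A_norm @ X @ W

def attention_adjacency(X, a):
    """Soft adjacency: pairwise scores from node features, normalized
    per row with a softmax. The scoring vector `a` is a hypothetical
    stand-in for learned attention parameters."""
    s = X @ a                              # per-node scores
    logits = s[:, None] + s[None, :]       # simplistic additive scoring
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy example: 4 nodes in 2-D, exact pairwise distances as measurements.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(4, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)

A = thresholded_adjacency(dist, threshold=8.0)   # hard 0/1 graph
W = rng.standard_normal((2, 2))
H = gcn_layer(A, pos, W)                         # one GCN propagation

a = rng.standard_normal(2)
P = attention_adjacency(pos, a)                  # soft, row-stochastic graph
```

The contrast highlights the abstract's point: the hard adjacency depends critically on the chosen threshold (changing it rewires the graph), whereas the attention weights are computed from the data itself, removing that sensitive hyperparameter.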

Files

Taverne

File under embargo until 22-06-2026