Reasoning cartographic knowledge in deep learning-based map generalization with explainable AI

Journal Article (2024)
Author(s)

Cheng Fu (Universität Zürich)

Zhiyong Zhou (Universität Zürich)

Y. Xin (ETH Zürich)

Robert Weibel (Universität Zürich)

Affiliation
External organisation
DOI related publication
https://doi.org/10.1080/13658816.2024.2369535
Publication Year
2024
Language
English
Issue number
10
Volume number
38
Pages (from-to)
2061-2082

Abstract

Cartographic map generalization involves complex rules, and full automation has still not been achieved despite many efforts over the past few decades. Pioneering studies show that some map generalization tasks can be partially automated by deep neural networks (DNNs). However, in previous studies DNNs have been used as black-box models. We argue that integrating explainable AI (XAI) into a deep learning (DL)-based map generalization process can provide more insights for developing and refining the DNNs by revealing exactly what cartographic knowledge is learned. Following an XAI framework for an empirical case study, visual analytics and quantitative experiments were applied to explain the importance of the input features with respect to the predictions of a pre-trained ResU-Net model. This experimental case study finds that the XAI-based visualization results can easily be interpreted by human experts. With the proposed XAI workflow, we further find that the DNN pays more attention to the building boundaries than to the interior parts of the buildings. We thus suggest that boundary intersection over union is a better evaluation metric than the commonly used intersection over union for assessing raster-based map generalization results. Overall, this study shows the necessity and feasibility of integrating XAI into future DL-based map generalization development frameworks.
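
The abstract's preference for boundary intersection over union (IoU) over plain IoU can be illustrated with a minimal sketch for binary raster masks. The snippet below is not the authors' code; the 4-neighbourhood erosion used to extract a one-pixel-wide boundary and the toy masks are assumptions made purely for illustration.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Standard intersection over union for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union > 0 else 1.0

def boundary(mask: np.ndarray) -> np.ndarray:
    """Approximate 1-pixel boundary: mask pixels whose 4-neighbourhood
    is not entirely inside the mask (simple erosion-based definition)."""
    m = mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    return m & ~interior

def boundary_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU restricted to the boundary pixels of the two masks."""
    return iou(boundary(pred), boundary(target))

# Toy example: a generalized building footprint shifted by one pixel.
target = np.zeros((10, 10), dtype=bool); target[2:8, 2:8] = True
pred = np.zeros((10, 10), dtype=bool);   pred[2:8, 3:9] = True

print(f"IoU:          {iou(pred, target):.2f}")           # ~0.71, interiors overlap
print(f"Boundary IoU: {boundary_iou(pred, target):.2f}")  # ~0.33, outlines disagree
```

In this toy case the plain IoU stays high because the large interiors overlap, while the boundary IoU drops sharply, which matches the abstract's observation that the DNN's attention, and hence evaluation, should focus on building boundaries.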

Metadata only record. There are no files for this record.