Private Graph Extraction via Feature Explanations

Conference Paper (2023)
Author(s)

Iyiola E. Olatunji (L3S Research Center)

Mandeep Rathee (L3S Research Center)

Thorben Funke (L3S Research Center)

Megha Khosla (TU Delft - Multimedia Computing)

Multimedia Computing
Copyright
© 2023 Iyiola E Olatunji, Mandeep Rathee, Thorben Funke, M. Khosla
DOI related publication
https://doi.org/10.56553/popets-2023-0041
Publication Year
2023
Language
English
Pages (from-to)
59-78
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Privacy and interpretability are two important ingredients for achieving trustworthy machine learning. We study the interplay of these two aspects in graph machine learning through graph reconstruction attacks. The goal of the adversary here is to reconstruct the graph structure of the training data given access to model explanations. Based on the different kinds of auxiliary information available to the adversary, we propose several graph reconstruction attacks. We show that additional knowledge of post-hoc feature explanations substantially increases the success rate of these attacks. Further, we investigate in detail the differences between attack performance with respect to three different classes of explanation methods for graph neural networks: gradient-based, perturbation-based, and surrogate model-based methods. While gradient-based explanations reveal the most in terms of the graph structure, we find that these explanations do not always score high in utility. For the other two classes of explanations, privacy leakage increases with an increase in explanation utility. Finally, we propose a defense based on a randomized response mechanism for releasing the explanations, which substantially reduces the attack success rate. Our code is available at https://github.com/iyempissy/graph-stealing-attacks-with-explanation.
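As an illustrative sketch only (not the paper's exact mechanism, whose details are in the full text), a randomized response defense on a binary explanation mask can flip each entry independently with a probability controlled by a privacy parameter, limiting what an adversary can infer from any single released bit:

```python
import math
import random

def randomized_response(mask, epsilon=1.0):
    """Perturb a binary explanation mask via randomized response.

    Each bit is kept with probability p = e^eps / (e^eps + 1)
    and flipped otherwise, which gives epsilon-local differential
    privacy per entry. `mask` and `epsilon` are illustrative names,
    not taken from the paper's code.
    """
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return [b if random.random() < p_keep else 1 - b for b in mask]

# Smaller epsilon -> more flips -> stronger privacy but noisier
# explanations; larger epsilon -> the mask is released nearly intact.
noisy = randomized_response([1, 0, 1, 1, 0], epsilon=0.5)
```

The trade-off mirrors the abstract's finding: adding randomness to released explanations lowers the reconstruction attack's success rate at some cost to explanation utility.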