SoK: Explainable Machine Learning for Computer Security Applications

Conference Paper (2023)
Author(s)

Azqa Nadeem (TU Delft - Cyber Security)

Daniël Vos (TU Delft - Cyber Security)

Clinton Cao (TU Delft - Cyber Security)

Luca Pajola (University of Padua)

Simon Dieck (TU Delft - Cyber Security)

R. Baumgartner (TU Delft - Cyber Security)

Sicco Verwer (TU Delft - Cyber Security)

Research Group
Cyber Security
Copyright
© 2023 A. Nadeem, D.A. Vos, C.S. Cao, Luca Pajola, S. Dieck, R. Baumgartner, S.E. Verwer
DOI
https://doi.org/10.1109/EuroSP57164.2023.00022
Publication Year
2023
Language
English
Pages (from-to)
221-240
ISBN (print)
978-1-6654-6513-7
ISBN (electronic)
978-1-6654-6512-0
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Explainable Artificial Intelligence (XAI) aims to improve the transparency of machine learning (ML) pipelines. We systematize the growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 4 distinct objectives within an ML pipeline, namely 1) XAI-enabled user assistance, 2) XAI-enabled model verification, 3) explanation verification & robustness, and 4) offensive use of explanations. Our analysis of the literature indicates that many of the XAI applications are designed with little understanding of how they might be integrated into analyst workflows: user studies for explanation evaluation are conducted in only 14% of cases. The security literature sometimes also fails to disentangle the roles of the various stakeholders, e.g., by providing explanations to model users and designers while also exposing them to adversaries. Additionally, the role of model designers is particularly minimized in the security literature. To this end, we present an illustrative tutorial for model designers, demonstrating how XAI can help with model verification. We also discuss scenarios where interpretability by design may be a better alternative. The systematization and the tutorial enable us to challenge several assumptions, and present open problems that can help shape the future of XAI research within cybersecurity.
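To give a flavor of what "XAI-enabled model verification" can look like in practice, the minimal sketch below uses permutation feature importance (a simple, model-agnostic explanation technique) to check which features a security classifier actually relies on. It is not the tutorial from the paper; the synthetic network-flow data, the feature names, and the deliberately leaky "sensor_id" feature are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's tutorial): a model designer
# inspects global feature importances to verify that a classifier is not
# relying on a spurious feature, even when held-out accuracy looks fine.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical network-flow features; "sensor_id" is a data-collection
# artifact that leaks the label in this toy dataset.
feature_names = ["bytes_out", "duration", "port_entropy", "pkt_rate", "sensor_id"]
X = rng.normal(size=(2000, 5))
y = ((X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=2000)) > 0).astype(int)
X[:, 4] = y + rng.normal(scale=0.5, size=2000)  # spurious / leaky feature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# If a feature known to be spurious dominates the importances, that is a
# verification red flag that test accuracy alone would not reveal.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

In this toy scenario the leaky "sensor_id" feature would surface at the top of the ranking, prompting the designer to fix the data pipeline rather than ship a model that only appears accurate.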

Files

SoK_Explainable_Machine_Learni... (pdf, 1.27 MB)
Embargo expired on 31-01-2024
License info not available