To Know What You Do Not Know
Challenges for Explainable AI for Security and Threat Intelligence
Sarah van Gerwen (Vrije Universiteit Amsterdam)
J.E. Constantino Torres (TU Delft - Organisation & Governance)
Ritten Roothaert (Vrije Universiteit Amsterdam)
Brecht Weerheijm (Universiteit Leiden)
Ben Wagner (TU Delft - Organisation & Governance)
Gregor Pavlin (Thales Research and Technology)
Bram Klievink (Universiteit Leiden)
Stefan Schlobach (Vrije Universiteit Amsterdam)
Katja Tuma (Vrije Universiteit Amsterdam)
Fabio Massacci (Vrije Universiteit Amsterdam, Università degli Studi di Trento)
Abstract
Human analysts working in threat intelligence routinely leverage tools powered by artificial intelligence to assemble actionable intelligence. Yet threat intelligence sources and methods often carry significant uncertainties and biases. In addition, data sharing may be limited for operational, strategic, or legal reasons. Experts are aware of these limitations but lack formal means to represent and quantify these uncertainties in their daily work. In this chapter, we articulate the technical, legal, and societal challenges of building explainable AI for threat intelligence, and we discuss ideas for overcoming these challenges.