Little or Large?

The effects of network size on AI explainability in Side-Channel Attacks

Bachelor Thesis (2020)
Author(s)

D.D.M. Moonen (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Stjepan Picek – Graduation committee member (TU Delft - Cyber Security)

Marina Krček – Mentor (TU Delft - Cyber Security)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2020 Djoshua Moonen
Publication Year
2020
Language
English
Graduation Date
25-06-2020
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

A system is considered intelligent when it can interpret data, learn from it, and use those learnings to reach goals and perform tasks [1]. Since such systems are not a product of nature but are made by humans, they are called Artificial Intelligence (AI). The field of Side-Channel Attacks (SCA) has benefited from applying AI systems to its problems. Operations that were previously too resource-intensive to perform can now be executed using AI. Currently, the focus lies on exploring which parameters yield optimal performance when classifying side-channel traces. Since this application is recent, much research remains to be done. The literature claims that reducing the size of an architecture improves the explainability of the resulting models. However, this claim has not been explicitly shown to hold for SCA models, leaving a gap in knowledge. This paper aims to close that gap by testing these assumptions. The goal is to determine whether a reduction in the complexity of SCA models leads to improved explainability. An experiment was conducted using two existing SCA architectures, one of small and one of large complexity. Using heatmaps, the explainability of these models was assessed to investigate the existence of patterns. The results show a difference in the consistency of the classification process: the model with the lower complexity could more consistently indicate why a certain classification was made. The results indicate that the explainability of a given SCA model can be improved by decreasing its complexity.
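The abstract does not fix the exact heatmap technique; a minimal sketch of one common choice, an input-gradient (saliency) heatmap over a trained Keras model, could look like the code below. The trace length (700 samples), the 256-class key-byte output, and the placeholder MLP architecture are assumptions for illustration, not the thesis' exact setup.

```python
# Hypothetical sketch: gradient-based heatmap for a trained SCA classifier.
# The architecture, trace length, and saliency method are placeholder assumptions.
import numpy as np
import tensorflow as tf

def input_gradient_heatmap(model: tf.keras.Model, trace: np.ndarray) -> np.ndarray:
    """Return |d(predicted-class score)/d(input sample)| for every trace point."""
    x = tf.convert_to_tensor(trace[np.newaxis, :], dtype=tf.float32)  # shape (1, n_samples)
    with tf.GradientTape() as tape:
        tape.watch(x)
        scores = model(x, training=False)       # shape (1, 256): one score per key-byte guess
        top = tf.reduce_max(scores[0])          # score of the predicted class
    grads = tape.gradient(top, x)               # sensitivity of that score to each trace point
    return np.abs(grads.numpy()[0])             # heatmap over the trace

# Example with a small placeholder MLP over 700-sample traces.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(700,)),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(256, activation="softmax"),
])
heatmap = input_gradient_heatmap(model, np.random.randn(700).astype(np.float32))
```

Comparing such heatmaps across many traces is one way to judge how consistently a model points at the same trace regions when explaining its classifications.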
