Deterministic and Statistical Strategies to Protect ANNs against Fault Injection Attacks

Conference Paper (2021)
Author(s)

Troya Çağıl Köylü (TU Delft - Computer Engineering)

Cezar Reinbrecht (TU Delft - Computer Engineering)

S. Hamdioui (TU Delft - Quantum & Computer Engineering)

M. Taouil (TU Delft - Computer Engineering)

Research Group
Computer Engineering
Copyright
© 2021 T.C. Köylü, Cezar Reinbrecht, S. Hamdioui, M. Taouil
DOI related publication
https://doi.org/10.1109/PST52912.2021.9647763
Publication Year
2021
Language
English
Pages (from-to)
1-10
ISBN (print)
978-1-6654-0185-2
ISBN (electronic)
978-1-6654-0184-5
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Artificial neural networks are currently used for many tasks, including safety-critical ones such as automated driving. Hence, it is very important to protect them against faults and fault attacks. In this work, we propose two fault injection attack detection mechanisms: one based on the output labels produced for a reference input, and the other on the activations of neurons. First, we calibrate our detectors under normal operating conditions. Thereafter, we verify them to maximize the fault detection performance. To demonstrate the effectiveness of our solution, we consider widely used neural networks (AlexNet, GoogleNet, and VGG) together with their associated ImageNet dataset. Our results show that both detectors achieve high fault coverage, typically above 96%. Moreover, the hardware and software implementations of our detectors incur extremely low area and time overheads.
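The abstract only outlines the two mechanisms; the calibration and verification procedures are detailed in the paper itself. As a rough illustration, the minimal Python sketch below shows one plausible reading of the two ideas: a deterministic label check on a fixed reference input, and a statistical range check on neuron activations calibrated under fault-free conditions. The class names, the margin parameter, and the toy model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class ReferenceInputDetector:
    """Deterministic detector: a fixed reference input must keep
    producing the label recorded during fault-free calibration."""

    def __init__(self, model, reference_input):
        self.model = model
        self.reference_input = reference_input
        # Calibration under normal conditions: store the expected label.
        self.expected_label = int(np.argmax(model(reference_input)))

    def check(self):
        # Re-run the reference input; a changed output label
        # suggests that a fault has corrupted the network.
        label = int(np.argmax(self.model(self.reference_input)))
        return label == self.expected_label


class ActivationRangeDetector:
    """Statistical detector: flag inputs whose neuron activations
    fall outside ranges observed during fault-free calibration."""

    def __init__(self, margin=0.05):
        self.margin = margin  # assumed tolerance around the observed range
        self.low = self.high = None

    def calibrate(self, activations):
        # activations: (num_samples, num_neurons) array collected
        # from a monitored layer during normal operation.
        self.low = activations.min(axis=0)
        self.high = activations.max(axis=0)
        span = self.high - self.low
        self.low = self.low - self.margin * span
        self.high = self.high + self.margin * span

    def check(self, activation):
        # True only if every monitored neuron stays within its bounds.
        return bool(np.all((activation >= self.low) &
                           (activation <= self.high)))


# Hypothetical usage with a toy linear "model" standing in for a CNN.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 4))
model = lambda x: x @ weights

reference = rng.normal(size=8)
label_detector = ReferenceInputDetector(model, reference)
print(label_detector.check())  # True while the model is fault-free

act_detector = ActivationRangeDetector()
act_detector.calibrate(rng.normal(size=(100, 4)))  # fault-free activations
print(act_detector.check(rng.normal(size=4)))      # True if within bounds
```

The sketch only conveys the control flow of the two checks; the low area and time overheads reported in the paper come from dedicated hardware and software implementations, not from a design like the one above.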

Files

No_threadmark.pdf
(pdf | 0.623 MB)