Gambling for Success: The Lottery Ticket Hypothesis in Deep Learning-Based Side-Channel Analysis

Book Chapter (2022)
Author(s)

Guilherme Perin (TU Delft - Cyber Security)

Lichao Wu (TU Delft - Cyber Security)

Stjepan Picek (Radboud Universiteit Nijmegen, TU Delft - Cyber Security)

Research Group
Cyber Security
Copyright
© 2022 G. Perin, L. Wu, S. Picek
DOI
https://doi.org/10.1007/978-3-030-97087-1_9
Publication Year
2022
Language
English
Bibliographical Note
Green Open Access added to the TU Delft Institutional Repository as part of the Taverne project ('You share, we take care!', https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.
Pages (from-to)
217-241
ISBN (electronic)
978-3-030-97087-1
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Deep learning-based side-channel analysis (SCA) represents a strong approach for profiling attacks. Still, this does not mean it is trivial to find neural networks that perform well in any setting. Among the architectures developed for SCA, small neural networks are easier to tune and less prone to overfitting but may have insufficient capacity to model the data, while large neural networks have sufficient capacity but can overfit and are more difficult to tune. This brings an interesting trade-off between simplicity and performance. This work proposes to use a pruning strategy and the recently proposed Lottery Ticket Hypothesis (LTH) as an efficient method to tune deep neural networks for profiling SCA. Pruning provides a regularization effect on deep neural networks and reduces the overfitting risk posed by overparameterized models. We demonstrate that we can find pruned neural networks that perform on the level of larger networks while reducing the number of weights by more than 90% on average. This way, pruning and LTH approaches become alternatives to costly and difficult hyperparameter tuning in profiling SCA. Our analysis is conducted over different masked AES datasets and for different neural network topologies. Our results indicate that pruning, and more specifically LTH, can produce competitive deep learning models.

Files

978_3_030_97087_1_9.pdf
(pdf | 1.61 MB)
Embargo expired on 02-01-2023
License info not available