Regularizers to the Rescue: Fighting Overfitting in Deep Learning-Based Side-Channel Analysis

Journal Article (2024)
Author(s)

A. Rezaeezade (TU Delft - Cyber Security)

Lejla Batina (Radboud Universiteit Nijmegen)

Research Group
Cyber Security
DOI
https://doi.org/10.1007/s13389-024-00361-5
Publication Year
2024
Language
English
Issue number
4
Volume number
14
Pages (from-to)
609-629
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Despite the considerable achievements of deep learning-based side-channel analysis, overfitting remains a significant obstacle to finding optimized neural network models. This issue is not unique to the side-channel domain: regularization techniques are popular remedies for overfitting and have long been used in various fields. In the side-channel domain, however, regularization has been applied only sporadically, and no systematic study has investigated the effectiveness of these techniques. In this paper, we investigate the effectiveness of regularization on a randomly selected model by applying four powerful and easy-to-use regularization techniques (L1, L2, dropout, and early stopping) to eight combinations of datasets, leakage models, and deep learning topologies. Our results show that while all of these techniques can improve performance in many cases, L1 and L2 are the most effective. Finally, if training time matters, early stopping is the best choice.
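To make the studied techniques concrete, the following is a minimal, self-contained sketch of three of the four regularizers named in the abstract: L1 and L2 penalties added to the training loss, and early stopping on a held-out validation set. It uses a toy logistic-regression model on synthetic data (the data shapes, hyperparameters, and helper names here are illustrative assumptions, not the paper's actual experimental setup; dropout is omitted since it requires a multi-layer network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical shapes, not the paper's datasets).
X_train = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y_train = (X_train @ w_true > 0).astype(float)
X_val = rng.normal(size=(50, 10))
y_val = (X_val @ w_true > 0).astype(float)

def loss_and_grad(w, X, y, l1=0.0, l2=0.0):
    """Logistic loss with optional L1/L2 penalty terms."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    # L2 adds l2 * ||w||^2 to the loss; L1 adds l1 * ||w||_1.
    loss += l2 * np.sum(w ** 2) + l1 * np.sum(np.abs(w))
    grad += 2.0 * l2 * w + l1 * np.sign(w)
    return loss, grad

def train(l1=0.0, l2=0.0, lr=0.5, epochs=500, patience=20):
    """Gradient descent with early stopping on validation loss."""
    w = np.zeros(X_train.shape[1])
    best_val, best_w, wait = np.inf, w.copy(), 0
    for _ in range(epochs):
        _, g = loss_and_grad(w, X_train, y_train, l1, l2)
        w -= lr * g
        # Validation loss is evaluated without the penalty terms.
        val_loss, _ = loss_and_grad(w, X_val, y_val)
        # Early stopping: keep the best weights seen so far and stop
        # after `patience` epochs without validation improvement.
        if val_loss < best_val - 1e-6:
            best_val, best_w, wait = val_loss, w.copy(), 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_w, best_val

w_l2, val_l2 = train(l2=1e-3)   # L2-regularized run
w_l1, val_l1 = train(l1=1e-3)   # L1-regularized run
```

In deep learning frameworks these correspond to built-in mechanisms (e.g. weight-decay options and early-stopping callbacks), but the underlying mechanics are the same: a penalty term reshapes the loss surface, while early stopping halts training at the point of best generalization.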