Improving machine learning based side-channel analysis: can dropout be dropped out?
Abstract
Adversaries can extract secret keys from cryptographic devices by analysing their physical leakages (e.g. power consumption and electromagnetic radiation). Over the last couple of years, researchers have shown that machine learning holds promise for this task. However, machine learning models need to be fine-tuned to enhance key-extraction performance. This paper investigates the so-called dropout hyper-parameter, which has been shown to reduce overfitting in various domains (e.g. speech recognition). Dropout is examined for two different models: multilayer perceptrons and convolutional neural networks. For the convolutional neural networks, two architectures are examined: one used as a benchmark in various papers and a simpler one. The findings of this paper show that adding dropout to the investigated multilayer perceptron architecture leads to significant improvements, whereas the convolutional neural network architectures show only negligible improvements.
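As context for the hyper-parameter studied here, the following is a minimal NumPy sketch of standard "inverted" dropout, the common formulation in which a fraction `rate` of activations is zeroed during training and the survivors are rescaled so the expected activation is unchanged at inference time. This is an illustrative sketch only; the paper's specific architectures and framework are not reproduced here.

```python
import numpy as np

def dropout(x, rate, training=True, rng=None):
    """Inverted dropout: zero a fraction `rate` of activations during
    training and rescale the rest by 1/(1 - rate), so no rescaling is
    needed at inference time."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

# One hidden layer of a toy MLP, with dropout applied to its activations.
rng = np.random.default_rng(0)
h = np.maximum(0, rng.standard_normal((4, 8)))  # ReLU activations
h_dropped = dropout(h, rate=0.5, rng=rng)       # training-time forward pass
```

At test time (`training=False`) the layer passes activations through untouched; the rescaling during training is what keeps the two modes statistically consistent.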