The Circle of DL-SCA: Improving Deep Learning-based Side-channel Analysis

For almost three decades, side-channel analysis has represented a realistic and severe threat to the security of embedded devices. As a well-known and influential class of implementation attacks, side-channel analysis has been applied against cryptographic implementations, processors, communication systems, and, more recently, machine learning models. Two reasons make these attacks powerful. First, they take advantage of unintended information leakages that a security designer can easily overlook. These leakages can stem from various sources, such as power consumption, electromagnetic emanations, timing, temperature, and acoustic and photonic emissions, and protecting against them can be challenging and costly. Second, such attacks do not require complicated or expensive equipment. Commonly, an adversary uses an oscilloscope to monitor one of these side-channel leakages, then performs statistical analysis to find the relation between the leakages and the secret-dependent values actually processed, and finally uses these relations to recover secret information.
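The statistical step of this workflow can be sketched as a classical correlation power analysis on simulated traces. Everything below is a toy assumption for exposition: the Hamming-weight leakage model, the XOR of plaintext and key as the targeted intermediate value, the noise level, and the key byte `0x3C` are all illustrative, and real attacks typically target a nonlinear intermediate such as an S-box output using measured traces.

```python
import numpy as np

def hamming_weight(values):
    # Number of set bits in each byte value (a common leakage model)
    return np.array([bin(v).count("1") for v in values])

rng = np.random.default_rng(0)
true_key = 0x3C                       # hypothetical secret key byte
n_traces = 1000

# Simulated measurements: leakage = HW(plaintext XOR key) + Gaussian noise
plaintexts = rng.integers(0, 256, n_traces)
traces = hamming_weight(plaintexts ^ true_key) + rng.normal(0.0, 1.0, n_traces)

# For each key guess, predict the leakage and correlate it with the traces;
# the correct guess yields the strongest linear relation.
correlations = np.empty(256)
for guess in range(256):
    predicted = hamming_weight(plaintexts ^ guess)
    correlations[guess] = np.corrcoef(predicted, traces)[0, 1]

best_guess = int(np.argmax(correlations))
print(hex(best_guess))
```

With enough traces, the guess maximizing the correlation coefficient recovers the key byte without any knowledge of the device's internals beyond the assumed leakage model.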

Fortunately, hardware and software developers are aware of these attack methods. Several protection mechanisms, also called side-channel countermeasures, have been implemented to increase the security assurance of their devices. However, this cat-and-mouse game has changed with the rise of artificial intelligence in side-channel analysis: some countermeasures that resist conventional methods can be easily bypassed by machine learning. This thesis aims to improve the capability of side-channel analysis using deep learning techniques. Specifically, we propose approaches covering the complete deep learning-based side-channel analysis procedure (we denote it as "The Circle of DL-SCA"). Before the leakages are applied to launch actual attacks, Chapter 2 offers strategies for improving the leakages' "quality" from various aspects. Then, Chapter 3 focuses on critical deep learning hyperparameters and proposes two automated neural architecture search methods that relieve the evaluator of the burden of tuning the neural network.
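To illustrate the idea behind automated architecture search, the sketch below runs a plain random search over multilayer-perceptron layouts. This is not one of the methods proposed in Chapter 3; the candidate space, the synthetic stand-in for labeled traces, and the validation-accuracy scoring are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for profiled side-channel traces with class labels
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def sample_architecture():
    # Candidate space: 1-3 hidden layers, each with 16, 32, or 64 neurons
    n_layers = int(rng.integers(1, 4))
    return tuple(int(rng.choice([16, 32, 64])) for _ in range(n_layers))

best_score, best_arch = -1.0, None
for _ in range(5):                     # small search budget for illustration
    arch = sample_architecture()
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300,
                          random_state=0).fit(X_train, y_train)
    score = model.score(X_val, y_val)  # validation accuracy as the objective
    if score > best_score:
        best_score, best_arch = score, arch

print(best_arch, round(best_score, 3))
```

Practical search methods replace the random sampling with smarter strategies (e.g., evolutionary or Bayesian approaches) and attack-oriented objectives, but the loop structure — sample an architecture, train, score, keep the best — is the same.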

Besides developing new attack strategies, we also examine existing attack methods and investigate how to enhance their efficiency, robustness, and explainability. Chapter 4 introduces an efficient learning scheme that reduces the number of required training traces. Then, we develop an attack evaluation metric that can reliably reflect the performance and robustness of the model. In Chapter 5, we create a novel methodology to evaluate the influence of noise and countermeasures on deep learning models, then apply the research outcomes to design low-cost countermeasures resilient to deep learning. Our research outcomes will push designers to develop more secure devices; the feedback loop between researchers and designers can eventually make the electronic world more secure.