Deep Neural Networks Aiding Cryptanalysis

A Case Study of the Speck Distinguisher

Conference Paper (2022)
Author(s)

Norica Băcuieți (ETH Zürich, Politehnica University of Timisoara)

Lejla Batina (Radboud Universiteit Nijmegen)

Stjepan Picek (Radboud Universiteit Nijmegen, TU Delft - Cyber Security)

Research Group
Cyber Security
Copyright
© 2022 Norica Băcuieți, Lejla Batina, S. Picek
DOI related publication
https://doi.org/10.1007/978-3-031-09234-3_40
Publication Year
2022
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Pages (from-to)
809-829
ISBN (print)
978-3-031-09233-6
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

At CRYPTO’19, A. Gohr proposed neural distinguishers for the lightweight block cipher Speck32/64 that outperformed the state-of-the-art at the time. However, the motivation for that particular architecture was not made clear. In this paper, we therefore study the depth-10 and depth-1 neural distinguishers proposed by Gohr [7] to determine whether smaller or better-performing distinguishers for Speck32/64 exist. We first evaluate whether smaller neural networks can match the accuracy of the proposed distinguishers. We answer this question in the affirmative: the depth-1 distinguisher can be pruned to a network that stays within one percentage point of the unpruned network’s accuracy. Having found a smaller network with comparable performance, we examine whether its performance can be improved further, and in particular whether preprocessing the input before feeding it to the pruned depth-1 network helps. To this end, we trained convolutional autoencoders that reconstruct the ciphertext pairs successfully and used their trained encoders as a preprocessing stage before training the pruned depth-1 network. We found that, even though the autoencoders achieved a nearly perfect reconstruction, the pruned network no longer had the capacity to extract useful information from the preprocessed input, which motivated us to examine feature importance for further insight. We used LIME for this purpose, and the results indicate that a stronger explainer is needed to assess the feature importance correctly.
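
The pruning step described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example of magnitude-based weight pruning in PyTorch, not the authors' code: the toy network, the random evaluation data, and the 50% sparsity level are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation): prune a small
# distinguisher-style network by magnitude and compare accuracy before/after.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a neural distinguisher: input is a 64-bit ciphertext pair
# encoded as 0/1 floats; output is the probability of "real pair" vs. "random".
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

def accuracy(net, x, y):
    with torch.no_grad():
        return ((net(x) > 0.5).float() == y).float().mean().item()

# Placeholder evaluation data for the sketch; in practice this would be
# labelled Speck32/64 ciphertext pairs.
x_val = torch.randint(0, 2, (1024, 64)).float()
y_val = torch.randint(0, 2, (1024, 1)).float()

baseline = accuracy(model, x_val, y_val)

# Remove the 50% smallest-magnitude weights in each linear layer
# (unstructured L1 pruning), then make the pruning permanent.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

pruned = accuracy(model, x_val, y_val)
print(f"baseline={baseline:.3f}  pruned={pruned:.3f}  drop={baseline - pruned:.3f}")
```

In the paper's setting, the comparison of interest is whether the pruned depth-1 distinguisher stays within one percentage point of the unpruned network's accuracy on real distinguisher data.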

Files

978_3_031_09234_3_40.pdf
(pdf | 1.23 MB)
- Embargo expired in 01-07-2023