The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search

Conference Paper (2020)
Author(s)

Sicco Verwer (TU Delft - Cyber Security)

Azqa Nadeem (TU Delft - Cyber Security)

Christian A. Hammerschmidt (TU Delft - Cyber Security)

Laurens Bliek (TU Delft - Algorithmics)

Abdullah Al-Dujaili (Massachusetts Institute of Technology)

Una-May O'Reilly (Massachusetts Institute of Technology)

Research Group
Cyber Security
Copyright
© 2020 S.E. Verwer, A. Nadeem, C.A. Hammerschmidt, L. Bliek, Abdullah Al-Dujaili, Una-May O’Reilly
DOI
https://doi.org/10.1145/3411508.3421374
Publication Year
2020
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne project, 'You share, we take care!' (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Pages (from-to)
61-70
ISBN (electronic)
978-1-4503-8094-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Training classifiers that are robust against adversarially modified examples is becoming increasingly important in practice. In the field of malware detection, adversaries modify malicious binary files to seem benign while preserving their malicious behavior. We report on the results of a recently held robust malware detection challenge. There were two tracks in which teams could participate: the attack track asked for adversarially modified malware samples, and the defend track asked for trained neural network classifiers that are robust to such modifications. The teams were unaware of the attacks/defenses they had to detect/evade. Although only 9 teams participated, this unique setting allowed us to make several interesting observations. We also present the challenge winner: GRAMS, a family of novel techniques to train adversarially robust networks that preserve the intended (malicious) functionality and yield high-quality adversarial samples. These samples are used to iteratively train a robust classifier. We show that our techniques, based on discrete optimization, beat purely gradient-based methods. GRAMS obtained first place in both the attack and defend tracks of the competition.
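To make the idea of a greedy, randomized multi-bit search concrete, the following is a minimal illustrative sketch, not the paper's actual GRAMS implementation. It assumes a binary feature representation in which only 0-to-1 flips (adding features) are allowed, a common way to model functionality-preserving malware modifications; the function names, the candidate-sampling scheme, and the toy scoring interface are all hypothetical.

```python
import random

def greedy_multi_bit_attack(x, score, max_iters=100, bits_per_step=4, seed=0):
    """Greedy random multi-bit search (illustrative sketch).

    x: list of 0/1 features; only 0->1 flips are tried, so the
       modification is additive and, by assumption, preserves the
       binary's (malicious) functionality.
    score: callable returning the classifier's maliciousness score;
       lower means "more benign" to the classifier.
    """
    rng = random.Random(seed)
    x = list(x)
    best = score(x)
    for _ in range(max_iters):
        zero_idx = [i for i, b in enumerate(x) if b == 0]
        if not zero_idx:
            break
        # Randomly sample several multi-bit candidate flips and keep
        # the one that lowers the score the most (the greedy step).
        candidates = []
        for _ in range(8):
            flips = rng.sample(zero_idx, min(bits_per_step, len(zero_idx)))
            cand = list(x)
            for i in flips:
                cand[i] = 1
            candidates.append((score(cand), cand))
        cand_score, cand = min(candidates, key=lambda t: t[0])
        if cand_score >= best:
            break  # no sampled flip improves the score; stop
        x, best = cand, cand_score
    return x, best
```

In an adversarial-training loop, samples produced this way would be fed back as additional training data for the classifier, which is then attacked again; flipping several bits per step is what makes the search "multi-bit" rather than a one-bit-at-a-time hill climb.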

Files

3411508.3421374.pdf
(pdf | 1.34 MB)
- Embargo expired in 09-05-2021
License info not available