Title
Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries
Author
Al-Kaswan, A. (TU Delft Software Engineering) (ORCID: 0000-0001-7338-2044)
Ahmed, Toufique (University of California)
Izadi, M. (TU Delft Software Engineering)
Sawant, Anand Ashok (University of California)
Devanbu, Premkumar (University of California)
van Deursen, A. (TU Delft Software Technology) (ORCID: 0000-0003-4850-3312)
Contributor
Ceballos, Cristina (editor)
Department
Software Technology
Date
2023
Abstract
Binary reverse engineering is used to understand and analyse programs for which the source code is unavailable. Decompilers can help, transforming opaque binaries into a more readable, source code-like representation. Still, reverse engineering is difficult and costly, involving considerable effort in labelling code with helpful summaries. While the automated summarisation of decompiled code can help reverse engineers understand and analyse binaries, current work mainly focuses on summarising source code, and no suitable dataset exists for this task. In this work, we extend large pre-trained language models of source code to summarise decompiled binary functions. Furthermore, we investigate the impact of input and data properties on the performance of such models. Our approach consists of two main components: the data and the model. We first build CAPYBARA, a dataset of 214K decompiled function-documentation pairs across various compiler optimisations. We extend CAPYBARA further by removing identifiers and deduplicating the data. Next, we fine-tune the CodeT5 base model with CAPYBARA to create BinT5. BinT5 achieves state-of-the-art BLEU-4 scores of 60.83, 58.82, and 44.21 for summarising source, decompiled, and obfuscated decompiled code, respectively. This indicates that these models can be extended to decompiled binaries successfully. Finally, we find that the performance of BinT5 is not heavily dependent on the dataset size or the compiler optimisation level. We recommend that future research further investigate knowledge transfer when working with less expressive input formats, such as stripped binaries.
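The abstract reports BLEU-4 scores for the generated summaries. As a point of reference, the sketch below implements sentence-level BLEU-4 (geometric mean of 1- to 4-gram precisions with a brevity penalty) in plain Python. This is a generic illustration of the metric, not the authors' evaluation script; production evaluations typically use an established implementation with smoothing.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu4(candidate: str, reference: str) -> float:
    """Sentence-level BLEU-4: uniform weights over 1..4-gram
    clipped precisions, multiplied by the brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped overlap: each candidate n-gram counts at most
        # as often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0.0:
        return 0.0  # any zero precision collapses the geometric mean
    log_avg = sum(math.log(p) for p in precisions) / 4.0
    # Brevity penalty: punish candidates shorter than the reference.
    if len(cand) > len(ref):
        bp = 1.0
    else:
        bp = math.exp(1.0 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_avg)
```

An exact match scores 1.0, and scores are conventionally reported multiplied by 100, so a BLEU-4 of 60.83 corresponds to 0.6083 on this scale.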
Subject
Decompilation
Binary
Reverse Engineering
Summarization
Deep Learning
Pre-trained Language Models
CodeT5
Transformers
To reference this document use:
http://resolver.tudelft.nl/uuid:ede25e1d-fa1e-4091-8f8d-3874c783d2a9
DOI
https://doi.org/10.1109/SANER56733.2023.00033
Publisher
IEEE, Piscataway
Embargo date
2023-11-15
ISBN
978-1-6654-5279-3
Source
Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
Event
2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 2023-03-21 → 2023-03-24, Taipa, Macao
Bibliographical note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne project ('You share, we take care!', https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Part of collection
Institutional Repository
Document type
conference paper
Rights
© 2023 A. Al-Kaswan, Toufique Ahmed, M. Izadi, Anand Ashok Sawant, Premkumar Devanbu, A. van Deursen