An Attacker's Dream? Exploring the Capabilities of ChatGPT for Developing Malware

Conference Paper (2023)
Author(s)

Yin Minn Pa Pa (Yokohama National University)

Shunsuke Tanizaki (Yokohama National University)

Tetsui Kou (Yokohama National University)

M.J.G. van Eeten (TU Delft - Organisation & Governance, Yokohama National University)

Katsunari Yoshioka (Yokohama National University)

Tsutomu Matsumoto (Yokohama National University)

Research Group
Organisation & Governance
Copyright
© 2023 Yin Minn Pa Pa, Shunsuke Tanizaki, Tetsui Kou, M.J.G. van Eeten, Katsunari Yoshioka, Tsutomu Matsumoto
DOI
https://doi.org/10.1145/3607505.3607513
Publication Year
2023
Language
English
Pages (from-to)
10-18
ISBN (electronic)
9781450390651
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We investigate the potential for abuse of recent AI advances by developing seven malware programs and two attack tools using ChatGPT, OpenAI Playground's "text-davinci-003" model, and Auto-GPT, an open-source AI agent capable of generating automated prompts to accomplish user-defined goals. We confirm that: 1) Under the safety and moderation controls of recent AI systems, it is possible to generate functional malware and attack tools (up to about 400 lines of code) within 90 minutes, including debugging time. 2) Auto-GPT does not lower the hurdle of crafting the right prompts for malware generation, but its automatically generated prompts evade OpenAI's safety controls. When given goals with sufficient detail, it wrote the code for nine of the nine malware programs and attack tools we tested. 3) There is still room to improve the moderation and safety controls of ChatGPT and the text-davinci-003 model, especially against the growing number of jailbreak prompts. Overall, we find that recent AI advances, including ChatGPT, Auto-GPT, and text-davinci-003, can generate malware and attack tools despite safety and moderation controls, highlighting the need for stronger safety measures in AI systems.

Files

3607505.3607513.pdf
(pdf | 1.23 Mb)
- Embargo expired on 21-02-2024
License info not available