The (ab)use of Open Source Code to Train Large Language Models

Conference Paper (2023)
Authors

A. Al-Kaswan (TU Delft - Software Engineering)

M. Izadi (TU Delft - Software Engineering)

Research Group
Software Engineering
Copyright
© 2023 A. Al-Kaswan, M. Izadi
To reference this document use:
https://doi.org/10.1109/NLBSE59153.2023.00008
Publication Year
2023
Language
English
Pages (from-to)
9-10
ISBN (electronic)
979-8-3503-0178-6
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In recent years, Large Language Models (LLMs) have gained significant popularity due to their ability to generate human-like text and their potential applications in various fields, such as Software Engineering. LLMs for Code are commonly trained on large, unsanitized corpora of source code scraped from the Internet. The content of these datasets is memorized and emitted by the models, often verbatim. In this work, we discuss the security, privacy, and licensing implications of this memorization. We argue that the use of copyleft code to train LLMs poses a legal and ethical dilemma. Finally, we provide four actionable recommendations to address this issue.

Files

NLBSE_Position_Paper_2_.pdf
(pdf | 0.173 MB)
License info not available
The_abuse_of_Open_Source_Code_... (pdf)
(pdf | 0.236 MB)
- Embargo expired on 05-02-2024
License info not available