A Cross-Lingual Evaluation of CodeGen's Performance in Code Completion

Bachelor Thesis (2023)
Author(s)

M.L. Keeler (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

A. Van Deursen – Mentor (TU Delft - Software Technology)

Azqa Nadeem – Graduation committee member (TU Delft - Cyber Security)

Maliheh Izadi – Mentor (TU Delft - Software Engineering)

J. Katzy – Mentor (TU Delft - Software Engineering)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2023 Miranda Keeler
Publication Year
2023
Language
English
Graduation Date
28-06-2023
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We present an investigation into the relationship between the average depth of the first correct prediction and the performance of CodeGen, conducted on a dataset of code files written in C++, Go, Java, Julia, Kotlin, and Python. The analysis examined the model's predictions at different layers using a Tuned Lens, which enables inspection of the intermediate representations, and additionally studied attention heads to gain insight into the model's behavior. We found a subset of four layers in which tokens are predicted correctly for the first time. These peaks are evident in CodeGen's performance and follow a small dip, a dip that also appears in the last layer. The results shed light on the varying performance of different layers and provide valuable insights into the strengths and weaknesses of CodeGen. These findings contribute to a broader understanding of language model performance in code completion tasks and carry implications for future improvements in this domain.
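The thesis's lens weights and evaluation data are not reproduced here, but the core idea of locating the first layer at which a token is predicted correctly can be sketched with the related logit-lens projection, which reuses CodeGen's own final layer norm and unembedding instead of the learned per-layer translators of a Tuned Lens. The checkpoint and prompt below are illustrative assumptions, not the thesis setup.

```python
# Minimal sketch: find the first layer depth at which CodeGen predicts
# the next token correctly, using a logit-lens projection (a simpler
# stand-in for the Tuned Lens used in the thesis).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/codegen-350M-mono"  # assumed checkpoint, not necessarily the thesis's
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "def add(a, b):\n    return a +"
ids = tok(prompt, return_tensors="pt").input_ids
target_id = tok(" b", add_special_tokens=False).input_ids[0]  # expected next token

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# out.hidden_states holds one tensor per depth: embeddings first, final layer last.
for depth, hidden in enumerate(out.hidden_states):
    # Project the last position's state through the final layer norm and the
    # unembedding to get per-layer logits. (The final entry is already
    # normalized; re-applying ln_f there is a harmless simplification.)
    logits = model.lm_head(model.transformer.ln_f(hidden[:, -1, :]))
    if logits.argmax(dim=-1).item() == target_id:
        print(f"first correct prediction at depth {depth}")
        break
else:
    print("token never predicted correctly at any depth")
```

Averaging this first-correct depth over many tokens and files, per language, gives the kind of statistic the abstract relates to CodeGen's performance; a Tuned Lens would replace the shared ln_f/lm_head projection with a learned affine translator per layer.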

Files

CodeShop_Miranda.pdf
(pdf | 1.38 Mb)
License info not available