A Cross-Lingual Evaluation of CodeGen's Performance in Code Completion

Abstract

We present an investigation into the relationship between the average depth of the first correct prediction and the performance of CodeGen in code completion. The analysis was conducted on a dataset of code files written in C++, Go, Java, Julia, Kotlin, and Python. We examined the model's predictions at intermediate layers using a Tuned Lens, which decodes each layer's hidden representations into token predictions, and additionally inspected attention heads to gain insight into the model's behavior. We found a subset of four layers in which tokens are predicted correctly for the first time. These peaks in CodeGen's performance come after a small dip, a dip that also appears in the last layer. The results shed light on the varying performance of different layers and provide valuable insights into the strengths and weaknesses of CodeGen. These findings contribute to a better understanding of language model performance on code completion tasks and have implications for future improvements in this domain.
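The abstract's central metric, the depth of the first correct prediction, can be made concrete with a small sketch. The paper does not give its exact definition, so this is an illustrative assumption: given the top-1 token predicted at every layer (e.g. decoded from hidden states with a Tuned Lens), the depth of a target token is the index of the earliest layer whose prediction already matches it; averaging over tokens gives the average depth. The function name `first_correct_depth` and the toy prediction arrays below are hypothetical.

```python
import numpy as np

def first_correct_depth(layer_preds: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """For each target token, return the index of the earliest layer whose
    top-1 prediction matches the target, or -1 if no layer is ever correct.

    layer_preds: (num_layers, num_tokens) array of predicted token ids,
                 one row per layer (e.g. decoded with a Tuned Lens).
    targets:     (num_tokens,) array of ground-truth token ids.
    """
    correct = layer_preds == targets            # (num_layers, num_tokens) bool
    any_correct = correct.any(axis=0)           # ever predicted correctly?
    # argmax over a boolean axis returns the first True; mask tokens that
    # are never correct with -1.
    return np.where(any_correct, correct.argmax(axis=0), -1)

# Toy example: 4 layers, 3 tokens.
layer_preds = np.array([
    [5, 9, 1],   # layer 0
    [7, 9, 2],   # layer 1: token 0 first correct here
    [7, 8, 2],   # layer 2: token 1 first correct here
    [7, 8, 3],   # layer 3 (final): token 2 never correct
])
targets = np.array([7, 8, 0])

depths = first_correct_depth(layer_preds, targets)   # [1, 2, -1]
avg_depth = depths[depths >= 0].mean()               # 1.5
```

In a real run, `layer_preds` would come from decoding each transformer layer's hidden state through the (tuned) unembedding rather than from hand-written arrays.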