Investigating the Performance of Language Models for Completing Code in Functional Programming Languages

A Haskell Case Study

Conference Paper (2024)
Authors

Tim van Dam (Student TU Delft)

Frank van der Heijden (Student TU Delft)

Philippe de Bekker (Student TU Delft)

Berend Nieuwschepen (Student TU Delft)

Marc Otten (Student TU Delft)

Maliheh Izadi (TU Delft - Software Engineering)

Research Group
Software Engineering
To reference this document use:
https://doi.org/10.1145/3650105.3652289
Publication Year
2024
Language
English
Pages (from-to)
91-102
ISBN (electronic)
979-8-4007-0609-7
DOI:
https://doi.org/10.1145/3650105.3652289
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Language model-based code completion tools have quickly grown in use, helping thousands of developers write code in many different programming languages. However, research on code completion models typically focuses on imperative languages such as Python and JavaScript, leaving functional programming languages underrepresented. As a result, these models often perform poorly on functional languages such as Haskell. To investigate whether this can be alleviated, we evaluate the performance of two language models for code, CodeGPT and UniXcoder, on the functional programming language Haskell. We fine-tune and evaluate the models on Haskell functions sourced from a publicly accessible Haskell dataset on HuggingFace. Additionally, we manually evaluate the models using our novel translated HumanEval dataset. Our automatic evaluation shows that knowledge of imperative programming languages acquired during the pre-training of LLMs may not transfer well to functional languages, but that code completion on functional languages is feasible. This underlines the need for more high-quality Haskell datasets. A manual evaluation on HumanEval-Haskell indicates that CodeGPT frequently generates empty predictions and extra comments, while UniXcoder more often produces incomplete or incorrect predictions. Finally, we release HumanEval-Haskell on GitHub, along with the fine-tuned models and all code required to reproduce our experiments.
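
To illustrate the function-level completion setting described above, the sketch below shows a hypothetical HumanEval-style Haskell task: a type signature and doc comment serve as the prompt, and the function body is the completion a model would be asked to produce. The function name, signature, and wording are illustrative assumptions, not taken from the HumanEval-Haskell dataset itself.

-- Hypothetical prompt: a type signature plus doc comment; a code completion
-- model would be asked to generate the function body that follows.

-- | Check whether any two distinct numbers in the list are closer to each
-- other than the given threshold.
hasCloseElements :: [Double] -> Double -> Bool
hasCloseElements xs threshold =
  or [ abs (a - b) < threshold
     | (i, a) <- zip [(0 :: Int) ..] xs
     , (j, b) <- zip [0 ..] xs
     , i /= j
     ]

In a manual evaluation such as the one described in the abstract, a reference solution of this kind would be compared against the model's predicted completion.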