An Empirical Analysis on the Performance of UniXcoder


Abstract

Numerous papers have empirically studied the performance of deep-learning-based code completion models. However, none of them has investigated whether good performance on statically typed languages translates to good performance on dynamically typed languages. The lack of available type information can make code completion more difficult, as different types are used and interacted with in different ways. On the other hand, natural language in the form of comments could compensate for missing type information. This paper evaluates whether UniXcoder, a state-of-the-art NLP model, performs code completion on dynamically typed languages as well as it does on statically typed ones. Furthermore, the impact of the presence of type annotations and comments is assessed. We show that UniXcoder is able to exploit type annotations and comments to improve code completion performance, and that using only single-line comments yields better results than using all comments in the source code.
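The kind of input variation assessed above can be illustrated with a small sketch. The snippet below is not taken from the paper's experimental pipeline; it is a minimal, hypothetical example of how Python type annotations might be stripped from source code (via the standard-library `ast` module) to produce an "annotation-free" variant of the same program for a completion model to consume.

```python
import ast

# Hypothetical example input: a small annotated function.
SOURCE = '''
def add(a: int, b: int) -> int:
    """Return the sum of two numbers."""
    total: int = a + b
    return total
'''

class StripAnnotations(ast.NodeTransformer):
    """Remove type annotations from signatures and assignments."""

    def visit_FunctionDef(self, node):
        node.returns = None                 # drop the return annotation
        for arg in node.args.args:
            arg.annotation = None           # drop parameter annotations
        self.generic_visit(node)
        return node

    def visit_AnnAssign(self, node):
        # Turn `x: int = value` into `x = value`; drop bare `x: int`.
        if node.value is None:
            return None
        return ast.copy_location(
            ast.Assign(targets=[node.target], value=node.value), node
        )

tree = StripAnnotations().visit(ast.parse(SOURCE))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # annotation-free version of SOURCE
```

Feeding both the original and the stripped variant to the same model isolates how much the model's predictions rely on explicit type information; the same idea applies to deleting comments while keeping the code unchanged.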