Large Language Models (LLMs) have shown impressive performance across various domains, including software engineering. Code generation, a crucial aspect of software development, has seen significant improvements through the integration of AI tools. However, while existing LLMs perform well when generating code for everyday tasks, their application in industrial settings and domain-specific contexts remains largely unexplored. This thesis investigates the potential of LLMs to generate code in proprietary, domain-specific environments, focusing on the leveling department at ASML. The primary goal of this research is to assess how well LLMs adapt to a domain they have not encountered before and generate complex, interdependent code within a domain-specific repository. This involves evaluating the performance of LLMs in generating code that meets ASML's specific requirements. To this end, the thesis investigates various prompting techniques, compares the performance of generic and code-specific LLMs, and examines the impact of model size on code generation capabilities. To evaluate code generation in repository-level scenarios, we introduce a new performance metric, build@k, which measures how effectively the generated code compiles and builds within the project. The results show that both prompting techniques and model size substantially influence the code generation capabilities of LLMs, whereas the performance difference between code-specific and generic LLMs is less pronounced and varies considerably across model families.
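For intuition, a plausible formalization of build@k is sketched below; the abstract does not spell out the estimator, so the exact form is an assumption that mirrors the standard unbiased pass@k estimator, with "builds successfully" replacing "passes the unit tests". Here $n$ denotes the number of samples generated per task and $c$ the number of those samples that build:

\[
\text{build@}k \;=\; \mathbb{E}_{\text{tasks}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
\]

Under this reading, build@k estimates the probability that at least one of $k$ sampled completions compiles and builds within the target repository.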