How can large language models and prompt engineering be leveraged in Computer Science education?
Systematic literature review
Alexandra Neagu (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Efthimia Aivaloglou – Mentor (TU Delft - Web Information Systems)
X. Zhang – Mentor (TU Delft - Web Information Systems)
T.J. Viering – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
In recent years, significant progress has been made in the field of natural language processing (NLP) through the development of large language models (LLMs) like BERT and ChatGPT. These models have showcased remarkable abilities across a range of NLP tasks. However, effectively harnessing their potential requires meticulous prompt engineering and a comprehensive understanding of their limitations.
Additionally, LLMs have attracted attention in the educational domain for their potential to enhance learning and teaching experiences, particularly in fostering the development of computational thinking skills.
This paper explores how large language models and prompt engineering techniques can be leveraged to generate successful solutions to coding problems after initial failures. It further examines potential applications of these techniques in teaching and learning practices involving LLMs, as well as their potential drawbacks in this context.