The rapid advancement of Large Language Models (LLMs) in recent years is not without concerns, such as a lack of privacy, environmental impact, and financial cost. It might therefore be beneficial to use Small Language Models (SLMs) instead, which individuals and organisations can run themselves, giving them more control over the model. This research investigates whether the LLM inside an AI hint-generation system can be replaced with an SLM while achieving comparable hint quality. To this end, we conducted an expert study validating the generated hints against a set of criteria, and a student experiment investigating student satisfaction and trust in the system. The expert results show that the hints generated by the SLM-powered system are slightly less personalised to the situation, are noticeably more misleading, and more often suggest the wrong approach. The student experiment shows similar results for these criteria, along with a slight decrease in the overall perceived helpfulness of the hints, trust in the system, and willingness to continue using it. The most prevalent complaint about the SLM-powered system was the inconsistency of its hint quality: it generated good and useful hints in some contexts, but also suggested wrong and unusable hints too often. Thus, while replacing the LLM with an SLM has potential, as the SLM is capable of generating useful hints, current SLMs are still too inconsistent.