Persona-Based Prompting: Enhancing Readability and Understanding in AI Responses for Children
J.B. de Castro (TU Delft - Electrical Engineering, Mathematics and Computer Science)
M.S. Pera – Mentor (TU Delft - Web Information Systems)
Catholijn Jonker – Graduation committee member (TU Delft - Interactive Intelligence)
H. Chakrabarti – Mentor (TU Delft - Web Information Systems)
Abstract
Large language models (LLMs) are increasingly used by children, yet their responses are often not tailored to young users’ reading levels or cognitive development. Previous attempts to improve content readability through prompt modifications, such as appending "for kids", have shown limited success. This project explores an alternative strategy: persona-based prompting. Rather than directly specifying the target audience, we instruct the LLM to role-play as a teacher, a figure familiar to children. Using real child-authored queries, we evaluate whether this role-based approach leads to more readable and comprehensible responses across different LLMs. Readability and comprehension were measured using established metrics, including the Flesch-Kincaid formulas and Age of Acquisition data. Our results show that for all four evaluated models, persona-based prompting consistently produces responses that are more readable and accessible than standard or intended-user prompting across all readability metrics and some comprehensibility metrics. This finding suggests that persona-based prompting is a promising strategy for improving the suitability of LLM outputs for young audiences.
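To make the contrast between the two prompting strategies and the scoring concrete, the following is a minimal Python sketch. The prompt wordings and the vowel-group syllable heuristic are illustrative assumptions, not the thesis implementation; only the Flesch-Kincaid Grade Level formula itself (0.39 × words/sentence + 11.8 × syllables/word − 15.59) is the standard published one.

import re

def persona_prompt(query: str) -> str:
    # Persona-based: ask the model to role-play a figure familiar to children.
    # (Exact wording here is a hypothetical example, not the thesis prompt.)
    return ("You are a friendly primary-school teacher explaining things "
            f"to your class. Answer this question: {query}")

def intended_user_prompt(query: str) -> str:
    # Intended-user: directly name the target audience, e.g. "for kids".
    return f"{query} Explain it for kids."

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels, at least 1 per word.
    # Published readability tools use dictionaries or more careful rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid Grade Level formula.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

if __name__ == "__main__":
    query = "Why is the sky blue?"
    print(persona_prompt(query))
    print(intended_user_prompt(query))
    response = ("Sunlight bounces off tiny bits of air. Blue light bounces "
                "the most, so the sky looks blue.")
    print(f"Flesch-Kincaid grade of response: {flesch_kincaid_grade(response):.1f}")

A lower grade score indicates text readable by younger children, which is the direction the persona-based condition improves in the reported results.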