Large language models (LLMs) are increasingly used by children, yet their responses are often not tailored to young users' reading levels or cognitive development. Previous attempts to improve content readability through prompt modifications such as adding "for kids" have shown limited success. This project explores an alternative strategy: persona-based prompting. Rather than directly specifying the target audience, we instruct the LLM to role-play a teacher, a figure familiar to children. Using real child-authored queries, we evaluate whether this role-based approach leads to more readable and comprehensible responses across different LLMs. Readability and comprehension were measured using established metrics, including Flesch-Kincaid formulas and Age of Acquisition data. Our results show that for all four evaluated models, persona-based prompting consistently produces responses that are more readable and accessible across all readability metrics and some comprehensibility metrics compared to standard or intended-user prompting. This finding suggests that persona-based prompting is a promising strategy for improving the suitability of LLM outputs for young audiences.
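
To make the three prompting strategies concrete, the following is a minimal sketch, not the authors' exact pipeline: it wraps a child-authored query as a standard prompt, an intended-user ("for kids") prompt, or a teacher-persona prompt, then scores each model response with a Flesch-Kincaid readability metric. The model name, the prompt wording, and the use of the OpenAI client and the textstat library are illustrative assumptions.

```python
# Hypothetical sketch of persona-based vs. standard vs. intended-user prompting,
# with readability scoring. Prompt wording and model choice are assumptions.
from openai import OpenAI
import textstat

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_messages(query: str, strategy: str) -> list[dict]:
    """Wrap a child-authored query according to the chosen prompting strategy."""
    if strategy == "standard":
        return [{"role": "user", "content": query}]
    if strategy == "intended-user":
        # Directly specify the target audience, e.g. by adding "for kids".
        return [{"role": "user", "content": f"{query} Explain it for kids."}]
    if strategy == "persona":
        # Ask the model to role-play a teacher rather than naming the audience.
        return [
            {"role": "system",
             "content": "You are a friendly elementary-school teacher "
                        "explaining things to your students."},
            {"role": "user", "content": query},
        ]
    raise ValueError(f"unknown strategy: {strategy}")


def readability_scores(text: str) -> dict[str, float]:
    """Established readability metrics: lower grade / higher ease = more readable."""
    return {
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
    }


query = "Why is the sky blue?"  # stand-in for a real child-authored query
for strategy in ("standard", "intended-user", "persona"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any evaluated LLM could be substituted
        messages=build_messages(query, strategy),
    )
    answer = response.choices[0].message.content
    print(strategy, readability_scores(answer))
```

In this sketch the persona is conveyed through a system message so that the user turn remains the unmodified child query; the same query can then be run under each strategy and across models, and the resulting readability scores compared directly.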