Co-Creating a Human-Centered AI Learning System for the Future of Education
Z.H. Krullaars (TU Delft - Industrial Design Engineering)
J.D. Lomas – Mentor (TU Delft - Human Technology Relations)
J.H. Boyle – Mentor (TU Delft - Materializing Futures)
Abstract
As Artificial Intelligence (AI) becomes a staple in modern classrooms, educators face a growing dilemma: how to embrace personalization effectively without compromising instructional control or inadvertently encouraging academic dishonesty. This research addresses the current "black box" nature of AI in schools, where students often use tools like ChatGPT as a shortcut to bypass thinking, leaving teachers sidelined and unable to monitor genuine progress. To address this, the thesis proposes a transition from a two-way (student-AI) interaction to a three-way collaboration involving the teacher, the student, and the AI.
Central to this research is the "Flight Simulator" metaphor, a conceptual framework designed to strike a balance between student autonomy and teacher oversight. In this model, the student is the pilot, operating in a safe "cockpit" where they can practice and make mistakes without real-world consequences. The AI acts as the co-pilot, providing Socratic guidance and hints rather than direct answers. Crucially, the teacher remains in the control tower, setting learning objectives and monitoring high-level "radar" data in order to intervene only when a student veers off course.
The development of the system followed a three-cycle research journey involving a "Think Tank" of educators and students. Cycle 1 (Exploration) revealed that teachers fear becoming "AI police" and that students often over-rely on AI, failing to spot misinformation. These insights led to a set of design requirements focused on content control, process over answers, and differentiated instruction. Cycle 2 (Prototyping) tested the boundaries of surveillance, revealing that "Structured Autonomy"—in which students have private chat spaces while teachers receive progress and mood analytics—was the most effective approach to maintaining trust while ensuring accountability.
The final output is Cubo, a web-based platform featuring two specialized interfaces. The Student Cockpit uses a game-like structure and interest-based metaphors (e.g., explaining physics through soccer) to build AI literacy. It forces students to engage in critical thinking, such as fact-checking AI claims, before completing a lesson. The Teacher Control Tower offers an automated dashboard that highlights struggling students and enables teachers to upload their own curriculum, ensuring the AI remains aligned with specific classroom objectives. To ease the digital transition, the system includes a physical "Starter Kit" to ground the technology in the real world.
The system was evaluated through a 10-day comparative study with 16 students. Results showed that while standard AI tools often decreased student confidence due to "hallucinations" and a lack of direction, the Cubo system significantly increased student confidence and improved their ability to spot misinformation (100% success rate). Teachers rated the system higher for ease of monitoring and felt empowered to return to their roles as mentors rather than administrators. Ultimately, this research demonstrates that AI, when designed as a three-way collaboration among teacher, student, and AI, can support genuine learning rather than undermine it.