Navigating value tensions in the use of AI for policy preparation
towards guidelines & a practical tool
D.L. Mieras (TU Delft - Industrial Design Engineering)
L.W.L. Simonse – Graduation committee member (TU Delft - Designing Value in Ecosystems)
Kars Alfrink – Graduation committee member (TU Delft - Human-Centred Artificial Intelligence)
Abstract
This thesis explores how the Dutch government can adopt artificial intelligence (AI) responsibly in the policy development process, a domain that has received little attention compared to AI use in policy execution. The project was conducted in collaboration with a governmental organisation.
AI holds promise for improving policy quality, efficiency, and democratic engagement, but it also introduces serious risks, such as depoliticisation, bias, loss of professional judgment, and declines in public trust. These risks, combined with organisational barriers such as low AI literacy, limited capacity, and fragmented structures, have led to hesitant adoption within ministries.
The thesis follows a constructive design research method, answering the research question by means of design. The design project combines a design approach based on Frame Innovation, Vision in Product Design (ViP), and Value Sensitive Design (VSD), resulting in a prototype tool that is evaluated with civil servants. The design balances encouragement and responsibility: it aims to stimulate AI curiosity, proposed as a key mechanism for learning and soft AI capacity-building, while reinforcing awareness of ethical and procedural boundaries. The tool invites reflection rather than prescribing behaviour, helping users think critically, recognise dilemmas, and connect to existing support resources, thereby anchoring quality assurance in the Dutch policy process.
This thesis contributes to bridging the gap between theoretical frameworks of responsible AI and practical application in policy preparation.