Can't LLMs do that? Supporting Third-Party Audits under the DSA: Exploring Large Language Models for Systemic Risk Evaluation of the Digital Services Act in an Interdisciplinary Setting
M.T. Sekwenz (TU Delft - Organisation & Governance)
R. Gsenger (TU Delft - Organisation & Governance)
Volker Stocker (Weizenbaum Institut)
Esther Görnemann (Weizenbaum Institut)
Dinara Talypova (Interdisciplinary Transformation University Austria)
S.E. Parkin (TU Delft - Organisation & Governance)
Lea Greminger (Weizenbaum Institut)
G. Smaragdakis (TU Delft - Cyber Security)
Abstract
This paper investigates the feasibility and potential role of Large Language Models (LLMs) in supporting systemic risk audits under the European Union’s Digital Services Act (DSA). It examines how automated tools can enhance the work of DSA auditors and other ecosystem actors by enabling scalable, explainable, and legally grounded content analysis. An interdisciplinary expert workshop with twelve participants from legal, technical, and social science backgrounds explored prompting strategies for LLM-assisted auditing. Thematic analysis of the workshop sessions identified key challenges and design considerations, including prompt engineering, model interpretability, legal alignment, and user empowerment. The findings highlight the potential of LLMs to improve annotation workflows and expand the scale of audits, while underscoring the continued importance of human oversight, iterative testing, and cross-disciplinary collaboration. This study offers practical insights for integrating AI tools into auditing processes and contributes to emerging methodologies for operationalizing systemic risk evaluations under the DSA.
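To make the kind of workflow described above concrete, the following is a minimal, hypothetical sketch of a single LLM-assisted annotation step: one content item is screened against one illustrative risk category, with a rationale requested for explainability and an explicit escape hatch to human review. The prompt wording, the risk category, the model name (`gpt-4o-mini`), and the `classify_item` helper are all assumptions for illustration; they are not the prompts or tooling developed in the paper’s workshop.

```python
# Hypothetical sketch of one LLM-assisted annotation step for a
# DSA systemic-risk audit. Prompt text, risk category, and model
# name are illustrative assumptions, not the paper's materials.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are assisting a human auditor evaluating content for systemic "
    "risks under the EU Digital Services Act (DSA). For the given item, "
    "answer with a label from {relevant, not_relevant, uncertain} for the "
    "named risk category, followed by a one-sentence rationale. Mark any "
    "item you cannot assess confidently as 'uncertain' for human review."
)

def classify_item(text: str, risk_category: str) -> str:
    """Ask the model for a label and rationale for one content item."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute as available
        temperature=0,        # reduce output variance for auditability
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Risk category: {risk_category}\n\nItem:\n{text}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Toy example; a real audit would batch items and log every model
    # output for human oversight, in line with the paper's emphasis.
    print(classify_item(
        "Example post text to be screened.",
        "dissemination of illegal content",
    ))
```

Requesting a rationale alongside the label, and routing low-confidence items to human reviewers, reflects the abstract’s framing of LLMs as scaling aids for auditors rather than replacements for human oversight.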