Can't LLMs do that? Supporting Third-Party Audits under the DSA: Exploring Large Language Models for Systemic Risk Evaluation of the Digital Services Act in an Interdisciplinary Setting

Conference Paper (2025)
Author(s)

M.T. Sekwenz (TU Delft - Organisation & Governance)

R. Gsenger (TU Delft - Organisation & Governance)

Volker Stocker (Weizenbaum Institut)

Esther Görnemann (Weizenbaum Institut)

Dinara Talypova (Interdisciplinary Transformation University Austria)

S.E. Parkin (TU Delft - Organisation & Governance)

Lea Greminger (Weizenbaum Institut)

G. Smaragdakis (TU Delft - Cyber Security)

Research Group
Cyber Security
DOI
https://doi.org/10.1145/3707640.3731929
Publication Year
2025
Language
English
ISBN (electronic)
979-8-4007-1397-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper investigates the feasibility and potential role of using Large Language Models (LLMs) to support systemic risk audits under the European Union’s Digital Services Act (DSA). It examines how automated tools can enhance the work of DSA auditors and other ecosystem actors by enabling scalable, explainable, and legally grounded content analysis. An interdisciplinary expert workshop with twelve participants from legal, technical, and social science backgrounds explored prompting strategies for LLM-assisted auditing. Thematic analysis of the sessions identified key challenges and design considerations, including prompt engineering, model interpretability, legal alignment, and user empowerment. Findings highlight the potential of LLMs to improve annotation workflows and expand audit scale, while underscoring the continued importance of human oversight, iterative testing, and cross-disciplinary collaboration. This study offers practical insights for integrating AI tools into auditing processes and contributes to emerging methodologies for operationalizing systemic risk evaluations under the DSA.