Labeling AI-generated Content on Short-Form Video Platforms

A Cross-Platform Analysis of YouTube Shorts and TikTok

Master's Thesis (2025)
Author(s)

M.E. Kuipers (TU Delft - Technology, Policy and Management)

Contributor(s)

S. Zannettou – Graduation committee member (TU Delft - Organisation & Governance)

Martijn Warnier – Graduation committee member (TU Delft - Multi Actor Systems)

Rogier Schröder – Graduation committee member (KPMG Netherlands)

Faculty
Technology, Policy and Management
Publication Year
2025
Language
English
Graduation Date
10-06-2025
Awarding Institution
Delft University of Technology
Programme
Engineering and Policy Analysis
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This thesis examines how AI-generated content labels are applied on two major short-form video platforms: TikTok and YouTube Shorts. As generative AI becomes more integrated into social media, platforms have introduced disclosure labels to support transparency and help users distinguish between real and synthetic content. However, little is known about how these labels are implemented in practice, whether they align with platform policies, and how they affect user engagement.

Using a dataset of 12,315 videos collected through hybrid scraping and API-based methods, this study compares labeling prevalence, source attribution, policy alignment, and potential effects on engagement metrics such as likes, comments, and views. The findings reveal notable differences: TikTok relies primarily on creator-applied labels with minimal platform enforcement, while YouTube Shorts applies more labels but does not disclose their source. In both cases, labeling practices are often inconsistent and not fully aligned with platform-specific guidelines or with the technical capabilities of C2PA, the content provenance standard used to identify AI-generated media.
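
To illustrate the kind of cross-platform comparison described here, the sketch below tallies label prevalence and source attribution from a per-video metadata table. It is a minimal Python example, not the thesis's actual pipeline; the file name and the columns platform, has_ai_label, and label_source are hypothetical placeholders.

import pandas as pd

# Hypothetical schema: one row per collected video, with platform
# ("tiktok" or "youtube_shorts"), a boolean has_ai_label, and
# label_source ("creator", "platform", or "undisclosed").
videos = pd.read_csv("video_metadata.csv")

# Labeling prevalence: share of videos on each platform carrying an AI label.
prevalence = videos.groupby("platform")["has_ai_label"].mean()
print(prevalence)

# Source attribution among labeled videos only (within-platform proportions).
labeled = videos[videos["has_ai_label"]]
attribution = labeled.groupby("platform")["label_source"].value_counts(normalize=True)
print(attribution)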

Although some statistically significant differences in engagement were observed, especially on TikTok, the overall effect of labeling on user behavior was limited. The findings highlight key challenges in current AI labeling practices and point to a growing need for more consistent, transparent, and accountable moderation policies. While regulatory frameworks such as the EU’s AI Act and Digital Services Act aim to improve transparency in digital environments, this study shows that current platform practices often fall short of these goals, especially in terms of visibility, attribution, and enforcement of AI labels.
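
Because engagement counts such as likes and views are typically heavy-tailed, a non-parametric test is one plausible way to probe the labeled-versus-unlabeled differences mentioned above. The sketch below uses a two-sided Mann-Whitney U test purely as an illustrative choice; the abstract does not specify the exact test used, and the table and column names are the same hypothetical placeholders as in the previous sketch.

import pandas as pd
from scipy.stats import mannwhitneyu

videos = pd.read_csv("video_metadata.csv")  # same hypothetical table as above

for platform, group in videos.groupby("platform"):
    with_label = group[group["has_ai_label"]]
    without_label = group[~group["has_ai_label"]]
    for metric in ["likes", "comments", "views"]:
        # Compare the engagement distributions of labeled vs. unlabeled videos.
        stat, p_value = mannwhitneyu(
            with_label[metric], without_label[metric], alternative="two-sided"
        )
        print(f"{platform} / {metric}: U={stat:.0f}, p={p_value:.4g}")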
