This thesis examines how AI-generated content labels are applied on two major short-form video platforms: TikTok and YouTube Shorts. As generative AI becomes more integrated into social media, platforms have introduced disclosure labels to support transparency and help users distinguish between real and synthetic content. However, little is known about how these labels are implemented in practice, whether they align with platform policies, and how they affect user engagement.
Using a dataset of 12,315 videos collected through a combination of web scraping and API-based methods, this study compares labeling prevalence, source attribution, and policy alignment, as well as potential effects on engagement metrics such as likes, comments, and views. The findings reveal notable differences: TikTok relies primarily on creator-applied labels with minimal platform enforcement, while YouTube Shorts applies labels to a larger share of videos but does not disclose their source. In both cases, labeling practices are frequently inconsistent and only partially aligned with platform-specific guidelines or with the technical capabilities of C2PA, the content provenance standard that platforms rely on to identify AI-generated media.
Although some statistically significant differences in engagement were observed, particularly on TikTok, the overall effect of labeling on user behavior was limited. These results highlight key challenges in current AI labeling practices and point to a growing need for more consistent, transparent, and accountable moderation policies. While regulatory frameworks such as the EU’s AI Act and Digital Services Act aim to improve transparency in digital environments, this study shows that current platform practices often fall short of these goals, especially with respect to the visibility, attribution, and enforcement of AI labels.