Delete or not to Delete

Methodological Reflections on Content Moderation

Abstract

Content moderation protects human rights such as freedom of speech, as well as the right to impart and seek information. Online platforms implement rules to moderate content on their platforms through their Terms of Service (ToS), which provide the legal grounds for deleting content. Content moderation is an example of a socio-technical process: the architecture includes a layer that classifies content according to the ToS, followed by human moderation for selected pieces of content. New regulatory approaches, such as the Digital Services Act (DSA) or the Artificial Intelligence Act (AIA), demand more transparency and explainability for moderation systems and the decisions they produce. This article therefore addresses questions about the socio-technical sphere of human moderation:
• How is certainty about content moderation decisions perceived within the moderation process?
• How does the measurement of time affect content moderators’ work?
• How much context is needed to take a content moderation decision?
A sample of 1,600 pieces of content was coded according to international and national law, as well as the Community Standards developed by Meta, mimicking a content moderation scenario that includes a lex specialis for content moderation, the German Network Enforcement Act (NetzDG).
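
Purely as an illustration of the two-stage architecture mentioned in the abstract (automated classification followed by human moderation of selected items), and not as part of the article's methodology, a minimal, hypothetical pipeline might look like the following sketch. All names, categories, and thresholds are illustrative assumptions.

```python
# Minimal sketch of a classify-then-escalate moderation pipeline.
# Low-certainty classifier outputs are routed to human review,
# mirroring the "certainty" question raised in the abstract.
from dataclasses import dataclass

@dataclass
class Decision:
    content_id: str
    label: str          # e.g. "tos_violation" or "allowed"
    confidence: float   # classifier certainty in [0, 1]
    route: str          # "auto" or "human_review"

def classify(text: str) -> tuple[str, float]:
    """Placeholder for an automated ToS classifier (a real system would call a trained model)."""
    flagged = "banned_phrase" in text.lower()
    return ("tos_violation", 0.62) if flagged else ("allowed", 0.97)

def moderate(content_id: str, text: str, threshold: float = 0.9) -> Decision:
    label, confidence = classify(text)
    # Items below the certainty threshold are escalated to human moderators.
    route = "auto" if confidence >= threshold else "human_review"
    return Decision(content_id, label, confidence, route)

print(moderate("c-001", "this post contains a banned_phrase"))
# Decision(content_id='c-001', label='tos_violation', confidence=0.62, route='human_review')
```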