Content moderation is a vital task that online platforms must perform, according to the law, to create suitable online environments for their users. By the law, we mean national or European laws that require the removal of content by online platforms, such as EU Regulation 2021/784, which addresses the dissemination of terrorist content online. Content moderation required by these national or European laws, summarised here as ‘the law’, differs from moderation that is not directly required by law but is instead conducted voluntarily by the platforms. New regulatory requirements add a further layer of complexity to the legal grounds for content moderation and shape platforms’ daily decisions. These decisions are grounded either in different sources of law, such as international or national provisions, or in contractual terms, such as the platform's Terms of Service and Community Standards. However, how to empirically measure these essential aspects of content moderation remains unclear. Therefore, we ask the following research question: How do online platforms interpret the law when they moderate online content? To understand this complex interplay and empirically test the quality of a platform's content moderation claims, this article develops a methodology that generates empirical evidence on the individual decisions taken for each piece of content while highlighting the subjective element of content classification by human moderators. We then apply this methodology to a single empirical case, an anonymous medium-sized German platform that provided us access to its content moderation decisions. With more knowledge of how platforms interpret the law, we can better understand the complex nature of content moderation, its regulation and compliance practices, and to what degree legally required moderation might differ from moderation on contractual grounds in dimensions such as the need for context, information, and time. Our results show considerable divergence between the platform's interpretation of the law and our own. We believe that a significant number of the platform's legal interpretations are incorrect and that, as a result, the platform removes legal content it falsely believes to be illegal (‘overblocking’) while simultaneously failing to moderate illegal content (‘underblocking’). In conclusion, we provide recommendations for content moderation system design that takes (legal) human content moderation into account and creates new methodological ways to test its quality and its effect on speech on online platforms.