
What do content moderation, the Digital Services Act and AI have in common?

An overview of the evolving landscape in technology, regulation, and platform responsibility


In today’s digital world, online platforms face the massive task of moderating an endless stream of user-generated content. What began as community efforts in the early internet days has evolved into complex practices that rely on AI-driven content moderation tools. To prevent the dissemination of illegal and harmful content such as hate speech and misinformation, platforms now take a stricter, often “remove first, ask questions later” approach - raising growing concerns about censorship and free expression.

This blog is written by Silvia A. Carretta, Women in AI's Chief Legal Officer. Silvia is also a soon-to-be doctor in Law and AI at Uppsala University (Sweden) and the Wallenberg AI, Autonomous Systems and Software Program - Humanity and Society Graduate School. Her research involves platform regulation, content moderation and broader private law issues arising from emerging technologies.


Here is her overview of the evolving landscape in technology, regulation, and platform responsibility.


  1. Content moderation from the early 1990s to today

Content moderation is the process by which online platforms detect, filter, and manage user-generated content based on their Terms and Conditions, Community Guidelines and applicable law. It involves removing, or restricting the visibility of, illegal, harmful or policy-violating material - such as racism, misinformation, or gender-based violence - using a mix of automated tools and human moderators. In the 1990s and early 2000s, platforms like AOL and Yahoo! Groups relied on volunteer moderators to remove spam, offensive language and illegal content. These efforts were guided by simple sets of rules and were limited in scale (Gillespie, 2018).

The rise of new platforms - like Facebook, YouTube, and Twitter/X - drastically increased the volume of user-generated content online. It has been estimated that - every minute - TikTok users upload 16,000 videos, Netflix subscribers stream the equivalent of 362,962 hours of content, and over 3.4 million videos are watched on YouTube (Domo, 2024). Manual content moderation was quickly becoming inadequate, prompting platforms to develop scalable, more proactive processes to tackle growing online harassment, copyright infringement and hate speech (Bradshaw et al., 2020). The COVID-19 pandemic and the 2020 U.S. election further shifted the content moderation landscape: faced with increasing pressure to act against health misinformation and conspiracy theories, platforms sought to impose stricter moderation rules (Morning Consult, 2019).
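To make that early, rule-based approach concrete, below is a minimal Python sketch of the kind of keyword and spam filtering that volunteer-era moderation relied on. The banned-term list, thresholds and function names are hypothetical stand-ins for illustration only, not any platform’s actual rules.

```python
# Minimal sketch of early-style, rule-based moderation (illustrative only).
# The term list and thresholds below are hypothetical, not any real platform's rules.

BANNED_TERMS = {"spamword", "slur_example"}   # placeholder blocklist a platform might maintain

def moderate_post(text: str) -> str:
    """Return a moderation decision for a single piece of user-generated content."""
    lowered = text.lower()
    if any(term in lowered for term in BANNED_TERMS):
        return "remove"            # clear rule violation: take the post down
    if lowered.count("http") > 3:
        return "flag_for_human"    # looks like spam: escalate to a (volunteer) moderator
    return "allow"                 # no rule matched: publish as-is

print(moderate_post("Check out http://a http://b http://c http://d"))  # flag_for_human
```

Rules like these are cheap and transparent, but as the numbers above suggest, they cannot keep pace with today’s upload volumes or catch violations that avoid the exact listed terms.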

Given the vast amount of content generated by users, online platforms became more proactive in moderating; otherwise, they risked being held directly liable for the unlawful behaviour of their users. To avoid liability, platforms began over-removing even legal content. This “remove first, ask questions later” approach, however, negatively impacts users’ fundamental rights, especially freedom of expression.


  2. Regulation and the Digital Services Act

In October 2022, the European Union adopted the Digital Services Act (DSA), marking a pivotal change in the regulation of online platforms. Its main goal is to create a safer and more open online space for Europeans while protecting their fundamental rights. A core provision, Article 6 DSA, introduces an obligation for online platforms to act “diligently, expeditiously, and objectively” to remove illegal and harmful online content and stop the spread of disinformation. Platforms must transparently communicate moderation decisions to the user who generated the content and offer the possibility of appeal through redress mechanisms. Beyond this curatorial obligation, the DSA introduces a “notice and action” system for flagging illegal content, mandatory transparency reports, and systemic risk assessment and auditing obligations, particularly for very large online platforms.
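As a rough illustration of the notice-and-action flow described above, here is a simplified Python sketch: a notice is submitted, the platform takes a decision, and a statement of reasons plus an appeal channel is communicated to the affected user. The class and field names are hypothetical simplifications; they do not reproduce the DSA’s legal text or any real platform’s system.

```python
# Simplified sketch of a "notice and action" flow (illustrative only; not legal advice).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Notice:
    content_id: str
    reporter: str
    alleged_violation: str          # e.g. "illegal hate speech"
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Decision:
    content_id: str
    action: str                     # "remove", "restrict_visibility" or "no_action"
    statement_of_reasons: str       # communicated transparently to the affected user
    appeal_channel: str = "internal complaint-handling system"

def handle_notice(notice: Notice, is_illegal: bool) -> Decision:
    """Decide on a notice and produce the explanation owed to the content's author."""
    action = "remove" if is_illegal else "no_action"
    reasons = (f"Notice from {notice.reporter} alleging '{notice.alleged_violation}' "
               f"was reviewed; outcome: {action}.")
    return Decision(notice.content_id, action, reasons)

decision = handle_notice(Notice("post-123", "trusted_flagger_A", "illegal hate speech"),
                         is_illegal=True)
print(decision.statement_of_reasons, "| appeal via:", decision.appeal_channel)
```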

Globally, other countries are closely monitoring the EU’s approach. While some are considering similar measures, others fear such laws may suppress free speech and push users towards other platforms or communication channels. Either way, the age of purely voluntary, self-defined moderation is coming to an end, giving way to enforceable legal obligations and platform accountability.


  3. The Role of AI in Content Moderation

With the exponential growth of content generated online every minute, automated keyword filtering and hash-matching technologies were no longer sufficient. Looking for better monitoring technologies, online platforms began to rely increasingly on advanced AI-driven moderation tools that can handle different kinds of media (e.g., text, images, audio, and video). These tools analyse vast volumes of data, detect patterns, and classify content based on predefined criteria and training, and they can act autonomously to remove harmful posts or block user accounts (Gollatz et al., 2018). This automation allows for faster responses through upload filters and proactive enforcement of “notice and stay-down” obligations.
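To give a sense of how such pipelines can combine hash-matching (“stay-down”) with score-based classification and human escalation, here is a toy Python sketch. The hash set, scoring function and thresholds are invented stand-ins; real systems use perceptual hashing and trained models rather than this simplistic logic.

```python
# Toy sketch of an automated moderation pipeline: hash-matching plus a score-based
# classifier with escalation to human review. All values below are placeholders.
import hashlib

KNOWN_ILLEGAL_HASHES = {"<sha256-of-previously-removed-item>"}   # placeholder "stay-down" list

def toy_harm_score(text: str) -> float:
    """Stand-in for a trained classifier: returns a pseudo-probability of harm."""
    markers = ("hate", "attack", "threat")
    hits = sum(marker in text.lower() for marker in markers)
    return min(1.0, hits / 3)

def moderate(media_bytes: bytes, caption: str) -> str:
    if hashlib.sha256(media_bytes).hexdigest() in KNOWN_ILLEGAL_HASHES:
        return "block_upload"            # stay-down: known illegal material never goes live
    score = toy_harm_score(caption)
    if score >= 0.67:
        return "remove"                  # high confidence: act autonomously
    if score >= 0.33:
        return "send_to_human_review"    # uncertain: defer to a human moderator
    return "allow"

print(moderate(b"some video bytes", "ordinary caption"))  # allow
```

Thresholds like these encode the trade-off discussed in this post: setting them low over-removes lawful speech, while setting them high lets harmful content through.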

Despite these advancements, AI moderation tools still struggle with nuance and context. For instance, TikTok’s algorithm has been accused of disproportionately flagging content by minority creators, while X limits the visibility of posts without deleting them through a “shadow moderation” approach that raises transparency concerns. Context can also determine whether content harms or helps: exposure to suicide-related content might increase the likelihood of an individual engaging in suicidal behaviour, yet the same content can be life-saving when it directs users to help resources in time, making it a valuable support for suicide prevention (Borge et al., 2021).


  4. Why This Matters for Society and Users’ Fundamental Rights

The rising use of AI moderation tools coincides with technical shortcomings and deep societal concerns about potential misuse, the monitoring of political dissent, and the amplification of hate speech and disinformation. Myanmar’s 2021 coup serves as a stark example: Facebook’s algorithm was accused of amplifying violent and hateful content that incited ethnic violence and contributed to atrocities against the Rohingya minority (Human Rights Council, 2018). A similar pattern occurred during Kenya’s 2022 election, when TikTok, Facebook, and X failed to protect electoral integrity and curb misinformation claiming fake results, spreading conspiracy theories and inciting targeted violent attacks against elected officials (Mozilla Foundation, 2022).

In conclusion, the intersection of content moderation regulation like the DSA and AI tools shapes how we experience our lives online and offline. The challenges online platforms face in removing or limiting the visibility of illegal and harmful content - from health misinformation and hate speech to political manipulation - underscore the critical need for responsible deployment of AI-based moderation tools and compliance with regulation. The DSA strongly signals how seriously the EU regulator takes platform accountability and the protection of fundamental rights in the digital space. As platforms continue to refine their moderation practices to avoid liability and comply with the DSA framework, a central question must guide the broader debate: how can we ensure that content moderation deployed by private big-tech platforms truly serves the public good, upholds user safety, and protects freedom of expression on the internet?

Online platforms should remain free to self-regulate and innovate within the boundaries set by law. It is the role of regulators, however, to set those legal boundaries and rigorously enforce them, as the EU has done with the DSA. If content moderation is to function as a form of digital governance, it requires not only compliance by online platforms but also proper supervision and enforcement by regulators.



Sources:

Collaborate with us!


As always, we appreciate you taking the time to read our blog post.

If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact WAI editors Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).


Silvia A. Carretta and Dina Blikshteyn

- Editors


