
The Evolution of Content Moderation Rules Throughout The Years

The birth of the digital public sphere

This article is contributed by Ahmed Medien.

Online forums and social marketplaces have become a large part of the internet over the past 20 years, since the early bulletin boards and AOL chat rooms. Today, users have moved primarily to social platforms that host user-generated content. These platforms make up the new online public squares, where ideas and information are exchanged and debated by anyone. Platforms that offer free services to their users decide the rules of engagement, and, as is the norm, these rules have changed and evolved over the past decade and a half. Starting from little to no moderation, platforms have introduced algorithms and guidelines that have shaped public conversations around the world (e.g., the ethnic violence in Myanmar, Brexit, the “Stop the Steal” campaign) through their successful implementation or the lack thereof. This post summarizes the evolution of the content moderation rules and community guidelines of four popular international platforms: Facebook, Twitter, Instagram, and YouTube.

The Timeline

2000–2009: Launch of social platforms

2004: Facebook launches for university students at select US schools

2005: YouTube launches

2006: Facebook becomes available to the general public

2009: Major platforms begin applying PhotoDNA technology to flag and remove child sexual abuse material (CSAM) images online

2009: Now global social platforms, YouTube and Facebook experience blocking in at least 13 countries around the world

2010–2019: Launch of standardized community guidelines

2010: Instagram launches

2010–2011: Social media platforms such as Facebook, Twitter, and YouTube play a major role in carrying voices and reporting from the regional protests in North Africa and the Middle East known as the Arab Spring

2010: Facebook releases its first set of Community Standards in English, French, and Spanish

2011: In response to activists in Egypt and Libya exposing police torture, YouTube makes an exception to allow violent videos from the Middle East that are educational, documentary, or scientific in nature

2012: Facebook acquires Instagram

2012: Twitter launches its first transparency report

2012: YouTube blocks the “Innocence of Muslims” video in several Muslim-majority countries

2012: Twitter institutes its “country withheld content” policy (soon after used to block content in Russia and Pakistan)

2012: Documents from Facebook’s content moderation offices are leaked for the first time (published by Gawker)

2013: Facebook launches its first transparency report

2014–2019: Misinformation, terror-linked, and organized hate content explodes online

2014: ISIS, terror-linked content, and online radicalization become major issues on social platforms in several countries

2014: The beheading video of American journalist James Foley appears online amid a wave of terror-linked content

2014: YouTube reverses its policy on allowing certain violent videos

2014: Platforms apply new rules against “Dangerous Organizations” linked to terrorism

2015–2016: Twitter changes its content moderation rules on harassment after a high-profile harassment campaign against the stars of the rebooted Ghostbusters

2016: Major platforms come under amateur and state-linked information manipulation campaigns during the 2016 US presidential election

2016: Facebook launches a fact-checking program on its platforms, partnering with IFCN-certified fact-checking organizations

2016: Platforms fail to stop misinformation campaigns that spur ethnic violence against the Rohingya minority in Myanmar

2016: Facebook Live starts to attract an increasing number of live-streamed suicides and shootings

2016: The aftermath of the shooting of Philando Castile in the US is broadcast live on Facebook

2017: Facebook, Microsoft, Twitter, and YouTube launch the Global Internet Forum to Counter Terrorism (GIFCT)

2018: TikTok merges with Musical.ly and becomes available worldwide

2018: Twitter removes 70 million bot accounts to curb the influence of political misinformation on its platform

2018: YouTube releases its first transparency report on community guidelines enforcement

2018: Facebook announces an Oversight Board to rule on whether removed content should be restored

2018: Facebook allows its users to appeal decisions to remove certain content

2019: The Christchurch terrorist attack, originally broadcast on Facebook Live, leads to the Christchurch Call to eliminate terrorist and violent extremist content online

2019: Twitter allows its users to appeal its content removal decisions

2019: TikTok attracts rapid growth among international audiences

2019: The novel coronavirus first emerges in China

2020-Present: New rules to moderate content online expand internationally to counter the increasingly international phenomena of hate speech, election misinformation, and health misinformation

2020-Q1: The novel coronavirus disease, COVID-19, spreads around the world; COVID-19 misinformation soon follows on major social platforms

Major social platforms intervene to ban health claims that contradict government and official health sources

Major platforms start to label COVID-19-related misinformation at scale

2020 (Q2+Q3): Facebook Oversight Board chooses its first cases

Twitter and Facebook start labeling the posts of US President Donald Trump

Facebook introduces a slew of content policy changes addressing:

  • Holocaust denial content
  • US-based organizations that promote hate
  • Organized militia groups
  • Conspiracy theories

Major platforms surface vetted information about election integrity in the US after repeated claims of voter fraud

Twitter launches its Transparency Center

2020-Q4: Misinformation around election integrity and fraud intensifies in the US, promoted by President Donald Trump

Major platforms start fact-checking posts by the US president and other high-profile accounts

Facebook starts labeling content in India

Platforms introduce new rules to counter COVID-19 vaccine misinformation

2021-Q1: Jan 6, Facebook suspends the account of US President Donald Trump for violating its community guidelines and inciting violence

Jan 6, Twitter deletes some tweets of US President Donald Trump and locks his account

Jan 8, Facebook, Twitter, YouTube, and Instagram ban the account of US President Donald Trump at least through the remainder of his term in office and block his posts from other accounts

The Facebook Oversight Board makes its first rulings on the cases it selected

Facebook Oversight Board announces it will rule on the ban of Donald J. Trump

Facebook amends its rules for moderating groups, adding new grounds for removal

The rules are complex

As private companies increasingly take on the role of the public forum, users and businesses operating on these platforms may find their rules increasingly complex. Between curbing the rise of hate speech and disinformation (which can sometimes amount to a national security threat) and protecting their users’ fundamental right to expression, platforms find themselves making consequential decisions: operating forums for the free exchange of ideas within a profit-driven business model, and under new regulatory frameworks that may impede their ability to expand internationally.

Trust, safety, and accountability are principles that major platforms have committed to operate by, following the Santa Clara Principles. With the rise of disinformation and false content on the internet in general, and across the larger online information ecosystem of which social platforms are a part, trust is the currency on which social platforms and their online communities thrive. Trust is the evidence that users are engaged in healthy debate online: they trust the information they read and the users they interact with. Accountability is the other side of the same coin. In a competitive, complex world of online spaces and competing ideals, social platforms must ensure they are accountable to their users by providing metrics and feedback when guideline enforcement mechanisms are applied to protect them from unfettered exposure to harmful online speech and content.

AI and the first line of defense

The scale of content on modern internet platforms, and the promise of social media, mean that some of the moderation work has to be delegated to algorithms and machines. While humans must stay in the loop, AI content moderation systems leverage large datasets of classified speech to filter harmful online content. The size of platforms like Facebook, Twitter, TikTok, and YouTube, with hundreds of millions of pieces of user-generated content posted every day, makes AI systems for content moderation imperative; still, platforms big and small will benefit from these solutions in the long run. AI content moderation systems do not eliminate the role of human moderators, because contextual knowledge and judgment remain important. Human moderators also contribute new training data for AI algorithms to learn from, rather than relying on the outputs of older models.
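
To make this concrete, here is a minimal sketch in Python of that human-in-the-loop triage. It is not any platform's actual system: the keyword scorer stands in for a trained classifier, and the names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field

BLOCKED_TERMS = {"threat", "slur"}  # placeholder for a model's learned signal

def toxicity_score(text: str) -> float:
    """Stand-in scorer; a real system would call a trained classifier."""
    words = text.lower().split()
    if not words:
        return 0.0
    return min(1.0, 5 * sum(w in BLOCKED_TERMS for w in words) / len(words))

@dataclass
class Moderator:
    review_queue: list = field(default_factory=list)   # borderline items for humans
    training_data: list = field(default_factory=list)  # human labels for retraining

    def triage(self, text: str) -> str:
        score = toxicity_score(text)
        if score >= 0.8:          # high confidence: remove automatically
            return "removed"
        if score >= 0.4:          # uncertain: escalate to a human moderator
            self.review_queue.append(text)
            return "pending_review"
        return "published"        # low risk: publish immediately

    def record_human_decision(self, text: str, label: str) -> None:
        # Human judgments become fresh training examples for the next model,
        # rather than recycling the outputs of older models.
        self.training_data.append((text, label))

The key design point is the middle band: only content the model is unsure about reaches the human queue, so moderators spend their judgment where context matters most, and their decisions feed the next round of training.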

A robust AI content moderation system that adapts to several languages can help community managers and users in the pre-moderation phase. Such a system can not only detect and hide offensive content but also educate users about the community rules they have agreed to before they commit a violation, creating a healthier online environment conducive to constructive conversation and the exchange of information.
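
Here is a minimal sketch of that pre-moderation flow, again in Python: before a draft goes live, the system scores it and, rather than silently removing it, tells the author which community rule the draft appears to break. The rule names, messages, and stand-in classifier are illustrative assumptions, not a real platform's rule set.

COMMUNITY_RULES = {
    "harassment": "Posts that insult or demean other users are not allowed.",
    "spam": "Repeated promotional links or offers are not allowed.",
}

def naive_classifier(text: str):
    """Stand-in for a multilingual model: returns (rule | None, confidence)."""
    if "buy now" in text.lower():
        return "spam", 0.9
    return None, 0.0

def check_draft(text: str, classify=naive_classifier) -> dict:
    rule, confidence = classify(text)
    if rule is None or confidence < 0.5:
        return {"action": "publish"}
    return {
        "action": "warn",
        "rule": rule,
        "message": "Your draft may violate our community rules: "
                   + COMMUNITY_RULES[rule] + " Please edit before posting.",
    }

print(check_draft("BUY NOW at example.com!!!"))
# -> {'action': 'warn', 'rule': 'spam', 'message': '...'}

Warning the author before publication, instead of removing content after the fact, both teaches the community rules and reduces the volume of violations that human moderators have to handle downstream.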
