
The Evolution of Content Moderation Rules Throughout The Years


The birth of the digital public sphere

This article is contributed by Ahmed Medien.

Online forums and social marketplaces have become a large part of the internet in the 20 years since the early bulletin boards and AOL chat rooms. Today, users have moved primarily to social platforms that host user-generated content. These platforms make up the new online public squares where anyone can exchange and debate ideas and information. Platforms that offer free services to their users set the rules of engagement, and those rules have changed and evolved throughout the past decade and a half. Starting from little to no moderation, platforms have introduced algorithms and guidelines whose successful implementation, or lack thereof, has shaped public conversations around the world (e.g., ethnic violence in Myanmar, Brexit, the “Stop the Steal” campaign). This post summarizes the evolution of the content moderation rules and community guidelines of four popular international platforms: Facebook, Twitter, Instagram, and YouTube.

The Timeline of Content Moderation

2000–2009: Launch of social platforms

2004: Facebook launches for university students at select US schools

2005: YouTube launches

2006: Facebook becomes available to the general public

2009: Major platforms apply PhotoDNA technology to flag and remove child sexual abuse material (CSAM) images online

2009: YouTube and Facebook, by now global social platforms, experience blocking in at least 13 countries around the world

2010–2019: Launch of standardized community guidelines

2010: Instagram launches

2010–2011: Social media platforms such as Facebook, Twitter, and YouTube play a major role in carrying the voices and reporting from regional protests in North Africa and the Middle East known as the Arab Spring

2010: Facebook releases its first set of Community Standards in English, French, and Spanish

2011: YouTube makes an exception to allow violent videos from the Middle East if they are educational, documentary, or scientific in nature in response to activists in Egypt and Libya exposing police torture

2012: Facebook acquires Instagram

2012: Twitter launches its first transparency report

2012: YouTube blocks the “Innocence of Muslims” video in several Muslim-majority countries

2012: Twitter institutes its “country withheld content” policy (soon after, content is blocked in Russia and Pakistan)

2012: Documents from Facebook’s content moderation offices are leaked for the first time (published by Gawker)

2013: Facebook publishes its first transparency report

2014–2019: Misinformation, terror-linked, and organized hate content explodes online

2014: ISIS, terror-linked content, and online radicalization become a major issue on social platforms in several countries

2014: The beheading video of American journalist James Foley appears online amid a wave of terror-linked content

2014: YouTube reverses its policy on allowing certain violent videos

2014: Platforms apply a new rule against “Dangerous Organizations” linked to terrorism

2015–2016: Twitter changes its content moderation rules on harassment after a high-profile harassment campaign against the stars of the rebooted Ghostbusters

2016: Major platforms come under amateur and state-linked information manipulation campaigns during the 2016 US presidential election

2016: Facebook launches a fact-checking program on its platforms in partnership with IFCN-certified fact-checking organizations

2016: Platforms fail to stop campaigns of misinformation that spur ethnic violence in Myanmar against the Rohingya minority

2016: Facebook Live starts to attract an increasing number of live-streamed suicides and shootings

2016: The aftermath of the police shooting of Philando Castile in the USA is broadcast live on Facebook

2017: Launch of the Global Internet Forum to Counter Terrorism (GIFCT)

2018: TikTok launches internationally after merging with Musical.ly (its Chinese version, Douyin, launched in 2016)

2018: Twitter removes 70 million bot accounts to curb the influence of political misinformation on its platform

2018: YouTube releases its first transparency report on Community Guidelines enforcement

2018: Facebook announces plans for an independent Oversight Board to rule on appeals to restore removed content

2018: Facebook allows its users to appeal its decisions to remove certain content

2019: The Christchurch terrorist attack (originally broadcast on Facebook Live) leads to the Christchurch Call to eliminate terrorist and violent extremist content online

2019: Twitter allows its users to appeal its content removal decisions

2019: TikTok attracts rapidly growing international audiences

2019: The novel coronavirus first emerges in China

2020-Present: New rules to moderate content online expand internationally to counter the increasingly global phenomena of hate speech, election misinformation, and health misinformation

2020-Q1: The novel coronavirus disease, COVID-19, spreads around the world; COVID-19 misinformation soon follows on major social platforms

Major social platforms intervene to ban health misinformation that contradicts government and official health sources

Major platforms start to label COVID-19 related misinformation at scale

2020-Q2/Q3: Facebook Oversight Board chooses its first cases

Twitter and Facebook start labeling the posts of US President Donald Trump

Facebook introduces a slew of new content policy shifts that address:

  • Holocaust denial content
  • US-based organizations that promote hate
  • Organized militia groups
  • Conspiracy theories

Major platforms introduce vetted information on election integrity in the US after repeated claims of voter fraud

Twitter launches its Transparency Center

2020-Q4: Misinformation around election integrity and fraud intensifies in the US, promoted by US President Donald Trump

Major platforms start fact-checking posts by the US president and other high-profile accounts

Facebook starts labeling content in India

Platforms introduce new rules to counter COVID-19 vaccine misinformation

2021-Q1: Jan 6, Facebook suspends the account of US President Donald Trump for violating its community guidelines and inciting violence

Jan 6, Twitter deletes some tweets of US President Donald Trump and locks his account

Jan 8, Facebook, Twitter, YouTube, and Instagram ban the accounts of US President Donald Trump at least for the remainder of his term in office and block his posts from other accounts

The Facebook Oversight Board makes its first rulings on the six cases it selected

Facebook Oversight Board announces it will rule on the ban of Donald J. Trump

Facebook amends its rules for moderating Groups and adds new grounds for removal

The rules of Content Moderation are complex

As private companies increasingly take on the role of the public forum, users and businesses operating on these platforms may find their rules increasingly complex. Between curbing the rise of hate speech and disinformation (which can sometimes amount to a national security threat) and protecting their users’ fundamental right to expression, platforms find themselves making consequential decisions: operating forums for the free exchange of ideas within a profit-driven business model and under new regulatory frameworks that may impede their ability to expand internationally.

Trust, safety, and accountability are the new rules that major platforms have committed to operate by in following the Santa Clara Principles. With the rise of disinformation and false content on the internet in general, and across the larger online information ecosystem of which social platforms are a part, trust is the currency with which social platforms and their online communities thrive. Trust is the evidence that users are engaged in healthy debate online: they trust the information they read and the users they interact with. Accountability is the other side of the same coin. In a competitive, complex world of online spaces and competing ideals, social platforms must ensure they are accountable to their users by providing metrics and feedback whenever guideline enforcement mechanisms are applied to protect them from unfettered exposure to harmful online speech and content.

AI as the first line of defense for Content Moderation

The scale of content moderation on modern internet platforms means that some of the work must be delegated to algorithms and machines. While humans must stay in the loop, AI content moderation systems leverage large datasets of labeled speech to filter harmful online content at scale. The size of platforms like Facebook, Twitter, TikTok, and YouTube, and the hundreds of millions of pieces of user-generated content posted every day, make AI systems for content moderation imperative; still, big and small platforms alike will benefit from these solutions in the long run. AI content moderation systems do not eliminate the role of human moderators, because contextual knowledge and judgment remain important. Human moderators also contribute new training data for AI algorithms to learn from, rather than relying on copies of old AI models.
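To make this division of labor concrete, here is a minimal sketch of such a pipeline in Python. The classifier, thresholds, and review queue are hypothetical stand-ins: a real system would plug in a model trained on large labeled datasets of harmful speech and tune its thresholds against live traffic.

```python
# Minimal sketch of AI-first moderation with humans in the loop.
# score_toxicity() is a stand-in for a real trained classifier.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str   # "remove", "review", or "allow"
    score: float  # classifier confidence that the text is harmful


# Hypothetical thresholds; a real platform tunes these per policy area.
AUTO_REMOVE_THRESHOLD = 0.95   # machine acts alone on clear violations
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain band goes to human moderators


def score_toxicity(text: str) -> float:
    """Placeholder scorer; a real system calls a model trained on labeled speech."""
    blocklist = {"threat", "slur"}  # toy vocabulary, not a real model
    words = text.lower().split()
    return min(1.0, 5 * sum(w in blocklist for w in words) / max(len(words), 1))


def moderate(text: str) -> Decision:
    score = score_toxicity(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision("remove", score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return Decision("review", score)
    return Decision("allow", score)


review_queue: list[tuple[str, Decision]] = []

post = "an example user comment"
decision = moderate(post)
if decision.action == "review":
    # Human rulings on queued items become new labeled training data,
    # which is how moderators keep the model from going stale.
    review_queue.append((post, decision))
```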

A robust AI content moderation system that adapts to several languages can help community managers and users in the pre-moderation phase. Such a system can not only detect and hide offensive content but also educate users on the community rules they have agreed to before a violation is committed, creating a healthier online environment conducive to constructive conversations and exchanges of information.
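Building on the sketch above, a hypothetical pre-moderation hook could return a warning to the client instead of silently hiding the post, citing the rule at stake so the author can revise the draft before publishing. The rule catalogue and message wording below are illustrative assumptions, not any platform's actual API.

```python
# Hypothetical pre-moderation hook, reusing moderate() from the sketch above.
# COMMUNITY_RULES is an assumed catalogue; platforms define and localize their own.
from typing import Optional

COMMUNITY_RULES = {
    "harassment": "Be respectful: no insults, threats, or personal attacks.",
}


def pre_moderate(draft: str) -> Optional[str]:
    """Return a warning for the client UI, or None to publish normally."""
    decision = moderate(draft)
    if decision.action in ("remove", "review"):
        # Educate rather than punish: show the rule and let the author edit.
        return (f"Your post may violate our community rules "
                f"({COMMUNITY_RULES['harassment']}). Please revise before posting.")
    return None
```

Running the same check client-side as the user types is a common design choice, since it catches most violations before the content ever reaches the platform.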
