UK far-right riots: Trust & Safety solutions for online platforms

The far-right riots in the UK

The recent UK far-right riots, undeniably fuelled by misinformation spread on social media platforms, serve as a stark reminder of the urgent need for online platforms to adapt their content moderation policies and work closely with regulators to prevent such tragedies. The consequences of inaction and noncompliance are serious, with hate speech and misinformation fuelling real-world violence, harassment and the marginalization of vulnerable communities.

A recent Reuters analysis revealed that a false claim about the Southport attack suspect was viewed at least 15.7 million times across various platforms, highlighting the alarming speed and scale at which misinformation can spread online. A Channel 4 analysis showed that 49% of traffic on social media platform X referencing “Southport Muslim” originated from the United States, with 30% coming from Britain (Source: Reuters).

Misinformation: the digital domino effect you definitely don’t want to start

The spread of misinformation on social media has severe consequences, including the perpetuation of hate speech and the incitement of violence. The fact that internet personality Andrew Tate shared a picture of a man he falsely claimed was responsible for the attack, with the caption “straight off the boat”, demonstrates how easily misinformation can be disseminated online without consequences for the person spreading it. Similarly, the Channel 4 analysis revealed that thousands of people online falsely claimed that a Syrian refugee, who has been in pre-trial detention in France since last June, was responsible for the attack in Southport.

UK far-right riots: the free speech conundrum

Smaller platforms, which struggle to implement effective moderation tools, have become breeding grounds for misinformation and hate speech. Unlike the social media giants, these platforms often lack both the technological infrastructure and the financial resources to fight back against harmful content.

To address this challenge, online platforms and regulators will need to work together to hash out the best ways to address the potential harm that content can inflict and to determine where regulation is appropriate. Different types of platforms present distinct risks, so recommended methods should be adapted to the platform in question, whether a messaging app, a social network or a news source, taking into account the ways in which harmful content can be relayed and the impact it can have on individuals and communities.

The difficulty lies in finding a middle ground between safeguarding freedom of expression and combating harmful content online. Online platforms must make difficult decisions about where the line between protected speech and hate speech is drawn. The limitations of current approaches became all the more evident during the recent UK far-right riots, when false information and hate speech spread like wildfire on social media.

What doesn’t work when battling misinformation

Experience has taught us that certain Trust and Safety approaches, on their own, do not work. These include:

  • Relying solely on human moderators: Human moderators can be overwhelmed by the volume of content generated and shared every second. A purely manual approach is often too slow to respond to fast-spreading misinformation.
  • Delayed action: Waiting too long to respond to misinformation allows it to gain traction, making it more difficult to correct. Once false narratives go viral, they are much harder to contain, even after the original content is removed.
  • Overly aggressive censorship: Stringent removal of content without clear explanation can irritate users and create perceptions of bias or suppression, leading to resistance and further spread of false information on alternative platforms.

When implementing policies and Trust and Safety techniques to combat the spread of misinformation on an online platform, the following factors are essential to take into consideration.

1. Context changes meaning

Context is KING. Understanding the context of content is crucial in determining its potential harm. The meaning of words or phrases can vary depending on the situation, the audience and the intent behind them. Without understanding these nuances, moderation decisions can be inconsistent or unfair. A phrase that may seem offensive in one situation could be entirely harmless in another. By considering factors such as regional variations, cultural norms and the intended tone, platforms can ensure a balanced approach that minimizes both over-moderation and the spread of harmful material. Understanding the context allows for fairer decisions, ensuring that content is judged accurately and moderation actions are aligned with the actual risks posed. At Checkstep, we use LLM scanning to allow customers to create enforcement policies in natural language and to manage exceptions that take context into account.
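To make the idea concrete, here is a minimal sketch (not Checkstep’s actual implementation) of how a natural-language policy with contextual exceptions can be handed to an LLM together with a post and whatever surrounding context the platform has. The model name, policy wording and helper function are illustrative assumptions only:

```python
# Minimal illustration (not Checkstep's implementation): a natural-language
# policy with contextual exceptions is sent to an LLM alongside the post
# and its context (thread, audience, region).
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

POLICY = """
Flag content that dehumanises people based on nationality, religion or
immigration status, or that incites violence against them.
Exceptions: quoting or reporting on such speech in order to condemn it,
satire that clearly targets the speech rather than the group, and
academic or journalistic discussion.
"""

def classify(post: str, context: str) -> str:
    """Return 'violating' or 'allowed' for a post, given its context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"You are a content moderation assistant.\n"
                        f"Policy:\n{POLICY}\n"
                        "Answer with exactly one word: violating or allowed."},
            {"role": "user",
             "content": f"Context: {context}\n\nPost: {post}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

print(classify("They're straight off the boat, send them all back.",
               "Reply in a thread about the Southport attack suspect"))
```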

2. Intent vs. Impact

The intent behind a piece of content may not be malicious, but its impact can still be significant. Consider the potential harm that content can cause, even if it is not explicitly intended to be harmful. For instance, someone might use a slur or hate speech with the intention of mocking the use of it, perhaps making a joke about the history or associations of the word. However, even with good intentions, this can backfire. For many, the use of the slur may still trigger distress or reinforce negative feelings, regardless of the original purpose. This is why clear and well-defined content moderation policies are essential. Moderators should assess both the intent and the impact of the content while following transparent guidelines. A structured policy ensures fairness, reduces ambiguity and helps mitigate harm. Rather than focusing solely on removal, clear policies enable platforms to address content that can cause harm while maintaining consistency and safeguarding user expression. At Checkstep, we partner with Trust & Safety leaders to develop clear and effective moderation policies. We’ll work closely with you to understand your specific needs and concerns, and help you create policies that empower your moderators to make informed decisions.

3. Scale, Reach, Virality and Engagement

The scale, reach, virality and engagement rate of online content can amplify its impact, and these aspects should also be considered in moderation decisions and actions. Moderation should account for these factors by closely monitoring content that goes viral or attracts high engagement. By being aware of what’s going viral on and off the platform, companies can prepare for content that may spread to their platform, allowing for faster intervention and mitigation of harm. In Checkstep’s platform, we keep you informed and in control. We can set up a process where you receive instant email or Slack alerts when community reports come in, allowing you to take immediate action. We also integrate seamlessly with your existing systems, providing valuable metadata to help you understand what’s trending on your platform.
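As a rough sketch of this kind of alerting, assuming a standard Slack incoming webhook rather than Checkstep’s own integration, the thresholds, webhook URL and field names below are placeholders:

```python
# Illustrative sketch only: fire a Slack alert when a piece of content
# crosses simple reach/engagement thresholds or attracts community reports.
# The webhook URL, thresholds and field names are placeholders.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
VIEWS_THRESHOLD = 50_000
REPORTS_THRESHOLD = 5

def maybe_alert(content_id: str, views: int, shares: int, reports: int) -> None:
    """Post a Slack message if content looks viral or heavily reported."""
    if views < VIEWS_THRESHOLD and reports < REPORTS_THRESHOLD:
        return
    text = (f":rotating_light: Content {content_id} needs review: "
            f"{views:,} views, {shares:,} shares, {reports} community reports.")
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)

maybe_alert("post_123", views=82_000, shares=4_100, reports=7)
```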

UK far-right riots: taking a proactive approach to Trust and Safety

Here are some solutions to make the internet safer and prevent the spread of hate speech and misinformation.

1. Proactive content detection with AI

In order to address the fast dissemination of false information and harmful speech, online platforms need to invest in sophisticated AI technology that can identify and flag harmful content instantly. This technology is especially useful in times of crisis, when spreading misinformation can be most damaging. Using AI-driven moderation tools, platforms can stop the dissemination of harm and foster a more secure online atmosphere. When it comes to misinformation, AI systems don’t just rely on detecting keywords; they can also be designed to cross-reference content with verified sources of truth, such as trusted news outlets, fact-checking organizations and official databases. By flagging content that contradicts reliable information, the AI can identify misleading posts, especially during crises when false narratives spread quickly. While AI alone can’t determine the absolute truth, it acts as an early warning system, flagging potentially harmful content for further review by human moderators. This combined approach helps reduce the spread of misinformation while maintaining accuracy.

While many solutions rely on keywords or searches to identify new trending topics, the advent of large language models (LLMs) gives you more flexibility: you can update your prompts and classify new types of content in seconds.

Checkstep recently launched a feature that lets you create new tags for emerging violating-content themes, so that it’s easy to catch new issues. Given a description of an emerging trend, LLMs let you move beyond keywords and search for themes or intents within your content.
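A sketch of how such trend tags might work in practice, again using a generic LLM call rather than the actual Checkstep feature; the tag names, descriptions and model are illustrative:

```python
# Sketch of theme tagging from plain-language trend descriptions
# (illustrative only; tag names and descriptions are made up for this example).
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

EMERGING_TAGS = {
    "southport-misinfo": "False claims about the identity, nationality or "
                         "religion of the Southport attack suspect.",
    "riot-incitement": "Calls to join, organise or escalate violent unrest.",
}

def tag_content(post: str) -> str:
    """Return the name of the matching emerging-issue tag, or 'none'."""
    tag_list = "\n".join(f"- {name}: {desc}" for name, desc in EMERGING_TAGS.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Match the post to one of these emerging-issue tags, "
                        f"or answer 'none':\n{tag_list}"},
            {"role": "user", "content": post},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```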

2. Regulatory compliance and collaboration

The ever-evolving regulatory landscape demands that online platforms work closely with regulators to comply with new rules as they come into force. Staying ahead of the curve to address the latest threats to online safety is the name of the game, and this partnership is crucial in developing effective strategies to mitigate the spread of harmful content.

If you’re looking for more information on key Trust & Safety regulations, we’ve also put together this Regulations Cheat Sheet, which summarizes all the regulations at a glance and translates all the requirements into the product features you’d need to be compliant.

3. Transparency 

For online platforms based in Europe or the UK, both the Digital Services Act and the Online Safety Act impose fairly strict transparency requirements and the need to set up appeals processes. This means, for example, that online platforms should issue statements of reasons: notices to users whose content has been removed or restricted, including information such as the moderation detection method, the reason for the decision, and how to appeal. Building these workflows is key to ensuring that users can appeal a moderation decision and that online platforms are not over-moderating and restricting free speech. We’ve built such workflows at Checkstep and ensure that all requirements are met, including the connection to the EU database.
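As a loose illustration of what a statement of reasons needs to carry, here is a simple record type with fields of our own choosing, roughly mirroring the information the DSA expects platforms to give affected users (not Checkstep’s schema or the official EU database format):

```python
# Rough illustration of a statement-of-reasons record. Field names are our
# own, not Checkstep's schema or the official EU transparency database format.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StatementOfReasons:
    content_id: str
    decision: str              # e.g. "removal", "visibility restriction"
    detection_method: str      # e.g. "automated (LLM scan)", "user report"
    policy_ground: str         # the platform rule or legal ground relied on
    facts: str                 # why the content was found to violate it
    appeal_url: str            # where the user can contest the decision
    issued_at: datetime

sor = StatementOfReasons(
    content_id="post_123",
    decision="removal",
    detection_method="automated (LLM scan), confirmed by a human moderator",
    policy_ground="Hate speech policy, section 2.1",
    facts="Post falsely identifies a named individual as the attacker and "
          "calls for violence against asylum seekers.",
    appeal_url="https://example.com/appeals/post_123",  # placeholder
    issued_at=datetime.utcnow(),
)
```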

4. Community-driven fact-checking initiatives

Community-based moderation initiatives, where verified users or experts can help identify and refute false information, provide an effective way to combat the dissemination of misinformation and disinformation. Misinformation refers to false information shared without harmful intent, like resharing a post without fact-checking. Disinformation, on the other hand, is deliberately false content meant to deceive. While both can spread quickly, not all false information is intentional. It’s all about enhancing the fact-checking process and curbing misinformation before it reaches a stage where it can cause chaos. This approach sharpens content accuracy while turning us all into savvy digital detectives and critical thinkers — no magnifying glass required.

In order to promote trust and engagement within the community, online platforms should make transparency a priority in their moderation practices: consistently releasing reports on moderation actions and offering easily accessible ways for users to report misinformation, similar to the “Community Notes” feature on X. This feature allows users to add context to potentially misleading posts, helping others better understand the content. By enabling users to contribute their insights, platforms can create a more democratic environment where the community collectively helps combat misinformation, turning every comment section into a democracy, one report at a time.

5. Moderation policy reviews and updates

In the face of emerging online threats, it is crucial to periodically reassess and upgrade moderation guidelines. This means laying down the law with crystal-clear rules on harmful content and a heads-up on what happens if you break them. At Checkstep, we work closely with partners to keep their moderation guidelines aligned with the latest developments and trends, ensuring platforms are prepared to address emerging risks. We always do our best to keep the digital chaos in check and mitigate the spread of harm.

6. Efficient processes and effective tools

With any complex moderation operation, it is crucial to supply Trust and Safety teams with efficient tools and resources. Our moderation policy template offers a comprehensive framework for developing and implementing effective moderation policies. By incorporating Checkstep into your moderation toolkit, you can streamline content moderation, reporting and workforce management in one place, with great efficiency. It’s the key to building safer, more positive online communities where meaningful conversations can thrive. Learn more at Checkstep.

Learning from the UK far-right riots: it’s time for a proactive approach to online safety

After the UK far-right riots, and with regulations coming into effect all over the world, online platforms need to buckle up and tackle misinformation and hate speech before they get out of control. Think of it as less of a choice and more of a digital emergency. Let’s join forces with regulators, experts and users to build a safer, more respectful online space. After all, the future of online safety is on the line, and this isn’t the time to hit the digital snooze button.
