
UK far-right riots: Trust & Safety solutions for online platforms


The far-right riots in the UK

The recent UK far-right riots, undeniably fuelled by misinformation spread on social media platforms, serve as a stark reminder of the urgent need for online platforms to adapt their content moderation policies and work closely with regulators to prevent such tragedies. The consequences of inaction and non-compliance are serious, with hate speech and misinformation fuelling real-world violence, harassment and the marginalization of vulnerable communities.

A recent Reuters analysis revealed that a single false claim about the suspect in the Southport attack was viewed at least 15.7 million times across various platforms, highlighting the alarming speed and scale at which misinformation can spread online. A Channel 4 analysis showed that 49% of traffic on social media platform X referencing “Southport Muslim” originated from the United States, with 30% coming from Britain (Source: Reuters).

Misinformation: the digital domino effect you definitely don’t want to start

The spread of misinformation on social media has severe consequences, including the perpetuation of hate speech and the incitement of violence. The fact that internet personality Andrew Tate shared a picture of a man he falsely claimed was responsible for the attack, with the caption “straight off the boat”, demonstrates the ease with which misinformation can be disseminated online without fear of consequences for the user. Similarly, the Channel 4 analysis revealed that thousands of people online falsely claimed that a Syrian refugee, who has been in pre-trial detention in France since last June, was responsible for the attack in Southport.

UK far-right riots: the free speech conundrum

Smaller platforms, which struggle to implement effective moderation tools, have become breeding grounds for misinformation and hate speech. Unlike the social media giants, these platforms often lack both the technological infrastructure and the financial resources to fight back against harmful content.

To address this challenge, online platforms and regulators will need to work together to hash out the best ways to address the harm that content can inflict and to determine where regulation is appropriate. Different types of platform present distinct dangers, so recommended methods need to be adjusted accordingly, whether for messaging apps, social media or news sources, while also considering the ways in which harmful content gets relayed and the impact it can have on individuals and communities.

The difficulty lies in finding a middle ground between safeguarding freedom of expression and combating harmful content online. Online platforms must make difficult decisions about where the line between protected speech and hate speech is drawn. The limits of current approaches became all the more evident during the recent UK far-right riots, when false information and hate speech spread like wildfire on social media.

What doesn’t work when battling misinformation

Experience has taught us that certain trust and safety approaches, on their own, do not work. These include:

  • Relying solely on human moderators: Human moderators can be overwhelmed by the volume of content generated and shared every second. A purely manual approach is often too slow to respond to fast-spreading misinformation.
  • Delayed action: Waiting too long to respond to misinformation allows it to gain traction, making it more difficult to correct. Once false narratives go viral, they are much harder to contain, even after the original content is removed.
  • Overly aggressive censorship: Stringent removal of content without a clear explanation can irritate users and create perceptions of bias or suppression, leading to resistance and the further spread of false information on alternative platforms.

When implementing policies and trust and safety techniques to help combat the spread of misinformation on an online platform, the following factors are essential to take into consideration.

1. Context changes meaning

Context is KING. Understanding the context of content is crucial in determining its potential harm. The meaning of words or phrases can vary depending on the situation, the audience and the intent behind them. Without understanding these nuances, moderation decisions can be inconsistent or unfair. A phrase that may seem offensive in one situation could be entirely harmless in another. By considering factors such as regional variations, cultural norms and the intended tone, platforms can strike a balance that minimizes both over-moderation and the spread of harmful material. Understanding the context allows for fairer decisions, ensuring that content is judged accurately and that moderation actions are aligned with the actual risk posed. At Checkstep, we use LLM scanning so customers can write enforcement policies in natural language and manage contextual exceptions.
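To make this concrete, here is a minimal sketch of what a natural-language policy with contextual exceptions can look like when evaluated by an LLM. The policy wording, labels, model choice and the classify_with_policy helper are illustrative assumptions, not Checkstep’s actual API; the snippet assumes an OpenAI-compatible client.

```python
# Illustrative sketch only: a natural-language policy with contextual
# exceptions, evaluated by an LLM. Policy text, labels and helper names
# are hypothetical, not a description of Checkstep's product.
from openai import OpenAI  # assumes an OpenAI-compatible client

POLICY = """
Flag content that accuses a named individual or group of responsibility
for the Southport attack without citing an official source.
Exceptions:
- Quoting or debunking the false claim in order to correct it.
- Reporting that explicitly attributes the claim to police or courts.
Answer with one word, VIOLATING or ALLOWED, then a short reason.
"""

def classify_with_policy(text: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do here
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic output for consistent enforcement
    )
    return response.choices[0].message.content

# The same claim is violating when asserted, allowed when debunked:
print(classify_with_policy("The attacker was an asylum seeker straight off the boat."))
print(classify_with_policy("This rumour is false: police confirmed the suspect was born in Cardiff."))
```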

2. Intent vs. Impact

The intent behind a piece of content may not be malicious, but its impact can still be significant. Consider the potential harm that content can cause, even if it is not explicitly intended to be harmful. For instance, someone might use a slur with the intention of mocking it, perhaps making a joke about the history or associations of the word. Even with good intentions, this can backfire: for many, the use of the slur may still trigger distress or reinforce negative feelings, regardless of the original purpose. This is why clear and well-defined content moderation policies are essential. Moderators should assess both the intent and the impact of the content while following transparent guidelines. A structured policy ensures fairness, reduces ambiguity and helps mitigate harm. Rather than focusing solely on removal, clear policies enable platforms to address content that can cause harm while maintaining consistency and safeguarding user expression. At Checkstep, we partner with Trust & Safety leaders to develop clear and effective moderation policies. We work closely with you to understand your specific needs and concerns, and help you create policies that empower your moderators to make informed decisions.
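As a small illustration of how a written rubric can keep intent and impact separate but both on the record, here is a hypothetical sketch of a structured moderation decision. The enum values, field names and the simple action rule are assumptions made for the example, not an actual policy.

```python
# Hypothetical sketch: a moderation decision that records both assessed
# intent and assessed impact, so actions follow a transparent rubric.
# All names and the rule itself are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Intent(Enum):
    MALICIOUS = "malicious"
    SATIRICAL = "satirical"        # e.g. mocking a slur rather than endorsing it
    UNINTENTIONAL = "unintentional"

class Impact(Enum):
    SEVERE = "severe"              # e.g. targets a vulnerable group during a crisis
    MODERATE = "moderate"
    NEGLIGIBLE = "negligible"

@dataclass
class ModerationDecision:
    content_id: str
    intent: Intent
    impact: Impact
    policy_clause: str             # which rule in the written policy applies

    def action(self) -> str:
        # Impact drives the action; intent mitigates it but never excuses severe harm.
        if self.impact is Impact.SEVERE:
            return "remove"
        if self.impact is Impact.MODERATE and self.intent is Intent.MALICIOUS:
            return "remove"
        if self.impact is Impact.MODERATE:
            return "reduce_reach"  # keep the content up but limit its distribution
        return "allow"

decision = ModerationDecision("post_123", Intent.SATIRICAL, Impact.MODERATE, "Hate speech §2.1")
print(decision.action())  # -> "reduce_reach"
```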

3. Scale, Reach, Virality and Engagement

The scale, reach, virality and engagement rate of online content can amplify its impact, and these factors should also be weighed in moderation decisions. Moderation should account for them by closely monitoring content that goes viral or attracts high engagement. By being aware of what is going viral on and off the platform, companies can prepare for content that may spread to their platform, allowing for faster intervention and mitigation of harm. In Checkstep’s platform, we keep you informed and in control. We can set up a process where you receive instant email or Slack alerts when community reports come in, allowing you to take immediate action. We also integrate seamlessly with your existing systems, providing valuable metadata on trending topics to help you understand what is gaining traction on your platform.
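As a minimal sketch of that kind of escalation, the snippet below flags content whose engagement velocity or report count crosses a threshold and posts an alert to a Slack incoming webhook. The threshold, queue wording and webhook URL are placeholders for illustration, not a description of Checkstep’s internals.

```python
# Minimal sketch: escalate high-velocity or heavily reported content by
# posting to a Slack incoming webhook. Threshold and URL are placeholders.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
VIEWS_PER_MINUTE_THRESHOLD = 500  # tune to your platform's normal baseline

def maybe_escalate(content_id: str, views_last_hour: int, reports: int) -> bool:
    velocity = views_last_hour / 60  # views per minute
    if velocity < VIEWS_PER_MINUTE_THRESHOLD and reports < 3:
        return False  # normal traffic, leave it to the standard review queue
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"High-velocity content {content_id}: "
                      f"{velocity:.0f} views/min, {reports} user reports. "
                      "Please review in the priority queue."},
        timeout=5,
    )
    return True
```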

UK far-right riots: taking a proactive approach to Trust and Safety

Here are some solutions to make the internet safer and prevent the spread of hate speech and misinformation.

1. Proactive content detection with AI

To address the fast dissemination of false information and harmful speech, online platforms need to invest in sophisticated AI technology that can identify and signal harmful content instantly. This technology is especially valuable in times of crisis, when misinformation spreads fastest and does the most damage. Using AI-driven moderation tools, platforms can stop the dissemination of harm and foster a more secure online atmosphere. When it comes to misinformation, AI systems don’t just rely on detecting keywords; they are also designed to cross-reference content with verified sources of truth, such as trusted news outlets, fact-checking organizations and official databases. By flagging content that contradicts reliable information, the AI can identify misleading posts, especially during crises when false narratives spread quickly. While AI alone can’t determine the absolute truth, it acts as an early warning system, flagging potentially harmful content for further review by human moderators. This combined approach helps reduce the spread of misinformation while maintaining accuracy.

While many solutions rely on keywords or searches to identify new trending topics, the advent of large language models (LLMs) makes it possible to update your prompts and classify new types of content in seconds.

Checkstep recently launched a feature that lets you create new tags for emerging violating-content themes, making it easy to catch new issues. Given a description of an emerging trend, LLMs let you move beyond keywords and search for themes or intents within your content.
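The sketch below illustrates the general idea of theme-based tagging: the emerging trend is described in plain language, and any completion-style LLM can then be asked whether a post matches the theme, even when the post avoids the obvious keywords. The tag name, description and the llm callable are assumptions for illustration, not a specific product API.

```python
# Sketch of theme-based tagging: the trend is described in plain language,
# so posts that never use the obvious keywords are still caught.
# The tag and the llm() callable are illustrative assumptions.
from typing import Callable

EMERGING_TAG = {
    "name": "southport_suspect_misinfo",
    "description": (
        "Claims that the Southport attacker was a recent migrant or asylum "
        "seeker, including paraphrases and insinuations that imply the same "
        "without using those exact words."
    ),
}

def matches_emerging_tag(text: str, llm: Callable[[str], str]) -> bool:
    prompt = (
        f"Theme: {EMERGING_TAG['description']}\n"
        f"Post: {text}\n"
        "Does the post express this theme? Answer YES or NO."
    )
    return llm(prompt).strip().upper().startswith("YES")

# Works with any completion function, e.g. a thin wrapper around your LLM provider:
# matches_emerging_tag("He came over on a small boat last week...", my_llm)
```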

2. Regulatory compliance and collaboration

The ever-evolving regulatory landscape demands that online platforms work closely with regulators to comply with new rules as they come into force. Staying ahead of the curve to address the latest threats to online safety is the name of the game, and this partnership is crucial to developing effective strategies to mitigate the spread of harmful content.

If you’re looking for more information on key Trust & Safety regulations, we’ve also put together this Regulations Cheat Sheet, which summarizes all the regulations at a glance and translates all the requirements into the product features you’d need to be compliant.

3. Transparency 

For online platforms based in Europe or the UK, both the EU’s Digital Services Act and the UK’s Online Safety Act impose fairly strict transparency requirements and the need to set up appeals processes. This means, for example, that online platforms should issue statements of reasons: notices to users whose content has been removed or restricted. These should include information such as the moderation detection method, the reason for the decision, and how to appeal. Building these workflows is key to ensuring that users can appeal a moderation decision and that online platforms are not over-moderating and restricting free speech. We’ve built such workflows at Checkstep and ensure that all requirements are met, including the connection to the EU database.
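For illustration, here is a sketch of the kind of record a statement of reasons can be built from, in the spirit of the DSA’s transparency requirements (the facts, the detection method, the ground for the decision and the available appeal route). The field names and example values are our own assumptions, not a mandated schema.

```python
# Illustrative sketch of a statement-of-reasons record. Field names are
# our own, chosen to cover the information a user-facing notice needs.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class StatementOfReasons:
    content_id: str
    decision: str              # e.g. "removal", "visibility_restriction"
    facts: str                 # what the content did, in plain language
    detection_method: str      # "automated", "user_report", "human_review"
    ground: str                # the policy clause or legal provision relied on
    appeal_channel: str        # how the user can contest the decision
    issued_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

sor = StatementOfReasons(
    content_id="post_123",
    decision="removal",
    facts="Post falsely named an individual as the Southport attacker.",
    detection_method="automated",
    ground="Community Guidelines §4.2 (harmful misinformation)",
    appeal_channel="https://example.com/appeals",  # placeholder URL
)
print(json.dumps(asdict(sor), indent=2))  # payload for the user notice / transparency reporting
```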

4. Community-driven fact-checking initiatives

Community-based moderation initiatives, where verified users or experts help identify and refute false information, provide an effective way to combat the dissemination of misinformation and disinformation. Misinformation refers to false information shared without harmful intent, like resharing a post without fact-checking. Disinformation, on the other hand, is deliberately false content meant to deceive. While both can spread quickly, not all false information is intentional. The goal is to strengthen the fact-checking process and curb misinformation before it reaches a stage where it can cause chaos. This approach sharpens content accuracy while turning us all into savvy digital detectives and critical thinkers, no magnifying glass required.

To promote trust and engagement within the community, online platforms should make transparency a priority in their moderation practices: consistently releasing reports on moderation actions and offering easily accessible ways for users to report misinformation, similar to the “Community Notes” feature on X. This feature allows users to add context to potentially misleading posts, helping others better understand the content. By enabling users to contribute their insights, platforms can create a more democratic environment where the community collectively helps combat misinformation, turning every comment section into a democracy, one report at a time.

5. Moderation policy reviews and updates

In the face of emerging online threats, it is crucial to periodically reassess and upgrade moderation guidelines. This means laying down the law with crystal-clear rules on harmful content and a heads-up on what happens if you break them. At Checkstep, we help partners keep their moderation guidelines aligned with the latest developments, working closely with them to ensure their platforms are prepared to address emerging risks. We always do our best to keep the digital chaos in check and mitigate the spread of harm.

6. Efficient processes and effective tools

With any complex moderation operation, it is crucial to equip Trust and Safety teams with efficient tools and resources. Our moderation policy template offers a comprehensive framework for developing and implementing effective moderation policies. By incorporating Checkstep into your moderation toolkit, you can streamline processes including content moderation, reporting and workforce management, all in one place. It’s the key to building safer, more positive online communities where meaningful conversations can thrive. Learn more at Checkstep.

Learning from UK far-right riots: it’s time for a proactive approach to online safety

After the UK far-right riots, and with regulations coming into effect all over the world, online platforms need to buckle up and tackle misinformation and hate speech before they get out of control. We should think of it as less of a choice and more of a digital emergency. Let’s join forces with regulators, experts and users to build a safer, more respectful online space. After all, the future of online safety is on the line, and this isn’t the time to hit the digital snooze button.
