The Checkstep August Update

The pace of change in Trust & Safety isn’t slowing down. In the past six months alone, we’ve seen new compliance obligations land in the EU and UK, AI-generated threats become more complex, and budget pressure mount across moderation teams. That’s why we’ve been heads-down shipping updates to help platforms stay compliant, reduce operational costs, and respond faster – without compromising safety.

We know that tech leaders are being asked to do more with less, and the buzz around AI (much of it justified) adds fresh pressure to modernise. If you’re navigating these challenges, here’s a look at some of the latest capabilities in Checkstep that might help your team stay ahead.

What’s new?

  • Automation that works your queues like a virtual moderator – Expand your workforce with advanced LLM bots that understand your full policy and can make decisions on flagged content. Every decision comes with a breakdown of the virtual moderator’s judgement and the policy violated, including quotes from the policy itself (see the sketch after this list). Best of all, if you change your policy, your agent updates immediately, with no extra training or configuration.
  • As you see examples of new harm, send them to your AI to learn from in seconds – Build fine-tuning data for your models directly from reviewed content, then send those examples straight to your models so they adapt to new patterns of harm immediately.
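
To make that flow concrete, here’s a minimal sketch of what handling a virtual moderator’s verdict could look like. The payload shape and field names below are illustrative assumptions for this post, not Checkstep’s actual API.

```python
# A minimal sketch of consuming a virtual-moderator decision.
# The payload shape and field names are illustrative assumptions,
# not Checkstep's actual API.
import json

# Example decision a policy-aware LLM bot might return for a flagged
# item: the action taken, the policy section violated, and the quoted
# policy text that justified the judgement.
decision_json = """
{
  "content_id": "post-4821",
  "action": "remove",
  "policy_violated": "Hate Speech, section 2.1",
  "policy_quote": "Content that attacks a person or group on the basis of a protected characteristic is not allowed.",
  "explanation": "The post targets a protected group with a slur."
}
"""

decision = json.loads(decision_json)

# Route the outcome: automated removals are applied directly, while
# anything else stays in the human review queue.
if decision["action"] == "remove":
    print(f"Removing {decision['content_id']}: {decision['policy_violated']}")
    print(f"Policy basis: \"{decision['policy_quote']}\"")
else:
    print(f"Escalating {decision['content_id']} for human review")
```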

Catch your worst users, not just the content they produce

  • Investigate and drill down on specific users with a click – Good moderation means proactively nudging user behavior and sanctioning habitual offenders who damage your community. Aggregate all of a user’s behavior in one place, review it in a single pane, and take action to ban or suspend them based on what you find.
  • Find users with repeat offenses easily – Filter user investigations to target the people with the most violations, the most borderline actions, or the newest accounts, so you can monitor your biggest risk areas instead of hunting for abusers by hand (a sketch of these filters follows this list).
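
As a rough illustration, the snippet below sketches the three risk views described above over a toy set of user records. The record fields and thresholds are assumptions made for the example, not a Checkstep schema.

```python
# A minimal sketch of repeat-offender filters over user records.
# Field names and thresholds are illustrative assumptions.
from datetime import datetime, timedelta, timezone

users = [
    {"id": "u1", "violations": 7, "borderline": 2,
     "joined": datetime(2025, 8, 1, tzinfo=timezone.utc)},
    {"id": "u2", "violations": 0, "borderline": 9,
     "joined": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": "u3", "violations": 1, "borderline": 0,
     "joined": datetime.now(timezone.utc) - timedelta(days=3)},
]

# Three of the risk views described above: habitual offenders,
# borderline behaviour, and brand-new accounts.
repeat_offenders = [u for u in users if u["violations"] >= 5]
most_borderline = sorted(users, key=lambda u: u["borderline"], reverse=True)[:10]
new_users = [u for u in users
             if datetime.now(timezone.utc) - u["joined"] < timedelta(days=7)]

print("repeat offenders:", [u["id"] for u in repeat_offenders])
print("most borderline:", [u["id"] for u in most_borderline])
print("new accounts:", [u["id"] for u in new_users])
```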

Large Team Security and Management Features

  • Enterprise SSO – Strengthen security and user management for your organization with single sign-on through Checkstep. We’re fully integrated with the Microsoft Identity Platform, Okta, Google Login, and Federate, and we can support any OIDC-enabled sign-on system (the standard OIDC parameters involved are sketched after this list).
  • Deep control over moderator group access and permissions – If you have a large moderator team, managing the groups or skills assigned to individual team members is a snap. Create groups and grant them permission to view different queues, take different actions, or see different parts of your policy. Fully customize your moderation team’s permissions.
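
For readers wiring up SSO, the sketch below lists the standard OIDC parameters an identity-provider integration typically involves. The values are placeholders, and Checkstep’s actual configuration fields may differ.

```python
# A minimal sketch of standard OIDC relying-party parameters.
# The values are placeholders; Checkstep's actual configuration
# fields may differ.
oidc_config = {
    # Discovery document published by the identity provider; the other
    # endpoints (authorization, token, JWKS) are resolved from it.
    "issuer": "https://login.example-idp.com",
    "discovery_url": "https://login.example-idp.com/.well-known/openid-configuration",
    # Credentials issued when the app is registered with the provider.
    "client_id": "example-moderation-app",
    "client_secret": "<stored-in-a-secret-manager>",
    # Where the provider sends users back after authentication.
    "redirect_uri": "https://app.example.com/auth/callback",
    # Scopes requested; "openid" is mandatory for OIDC.
    "scopes": ["openid", "profile", "email"],
}

print(f"SSO via {oidc_config['issuer']} for client {oidc_config['client_id']}")
```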

Multi-lingual Integrations Out-of-the-Box

  • Easily customize policies by geography or content type – More countries are introducing their own rules for online content. Checkstep recently launched new tagging features that let customers set up different policies and rules for individual countries, making it easier to stay compliant with local requirements wherever you do business (see the routing sketch after this list).
  • Localized policies for transparency to all customers – For transparency, it’s critical to keep records of policy changes over time. Checkstep now captures all changes across every language you support.
  • Community Report and Notice of Action appeal flows in local languages – We’ve launched full localization for appeal and community report workflows, so any Checkstep customer can give their end users a localized experience with a single integration. Assessing and reviewing these messages is seamlessly integrated into your existing workflow.
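
Here’s a minimal sketch of what routing content to geography-specific rule sets via tags might look like. The tag names, thresholds, and rule structure are invented for illustration and are not Checkstep’s configuration format.

```python
# A minimal sketch of geography-tagged policy rules. Tag names,
# thresholds, and structure are illustrative assumptions.
POLICY_RULES = {
    # Stricter defaults where local regulation requires them.
    "DE": {"hate_speech_threshold": 0.6, "require_notice_of_action": True},
    "UK": {"hate_speech_threshold": 0.7, "require_notice_of_action": True},
    # Fallback for countries without specific rules.
    "default": {"hate_speech_threshold": 0.8, "require_notice_of_action": False},
}

def rules_for(country_code: str) -> dict:
    """Pick the rule set tagged for the user's country, else the default."""
    return POLICY_RULES.get(country_code, POLICY_RULES["default"])

print(rules_for("DE"))   # country-specific rules apply
print(rules_for("BR"))   # falls back to the default policy
```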

Video Moderation Made Easy

  • Automatic transcription and translation on all videos – Videos longer than 30-60 seconds are a pain to moderate, particularly if you’re not sure where in the video the potential issue was identified. Checkstep launched automatic transcription and translation to make it easier to identify harmful dialogue.
  • Flag timestamps for review to watch the most critical parts of a video – To save review time, jump straight to harmful sections and view the AI flags for key types of harm at specific moments, making it easy to moderate a ten-minute video in seconds. No hunting for where the issues are (see the sketch after this list).
  • Up to six hours of scanning in a single video – Whether your video content is short or long, you can get deep scanning without any complex integrations. Checkstep now supports videos up to six hours in length.
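
To illustrate the timestamp-driven review described above, here’s a small sketch that sorts hypothetical AI flags by confidence and formats their offsets for a reviewer. The flag format is an assumption for the example, not Checkstep’s actual response format.

```python
# A minimal sketch of reviewing timestamped AI flags on a scanned
# video. The flag format is an illustrative assumption.
flags = [
    # (seconds_from_start, harm_type, confidence)
    (143.0, "violent_speech", 0.91),
    (1280.5, "harassment", 0.77),
    (4512.0, "self_harm", 0.88),
]

def as_timestamp(seconds: float) -> str:
    """Format a second offset as H:MM:SS so a reviewer can jump to it."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}"

# Surface the highest-confidence flags first so a moderator can jump
# straight to the riskiest moments instead of watching the whole video.
for ts, harm, conf in sorted(flags, key=lambda f: f[2], reverse=True):
    print(f"{as_timestamp(ts)}  {harm}  (confidence {conf:.2f})")
```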

And much more…

Interested in seeing some of these new features in action? Schedule a reconnect with the Checkstep team here.
