The pace of change in Trust & Safety isn’t slowing down. In the past six months alone, we’ve seen new compliance obligations land in the EU and UK, AI-generated threats become more complex, and budget pressure mount across moderation teams. That’s why we’ve been heads-down shipping updates to help platforms stay compliant, reduce operational costs, and respond faster – without compromising safety.
We know that tech leaders are being asked to do more with less, and the buzz around AI (much of it justified) adds fresh pressure to modernise. If you’re navigating these challenges, here’s a look at some of the latest capabilities in Checkstep that might help your team stay ahead.
What’s new?
Automation that’s actually easy – and adaptable to new trends or use cases
- Automation that works your queues like a virtual moderator – Add to your workforce with advanced LLM bots that understand your full policy, and have them make decisions on flagged content. With every decision, you’ll get a breakdown of the virtual moderator’s judgement and the policy violated, including quotes from the policy. Best of all, if you change your policy, your agent is updated immediately (with no extra training or configuration).
- When you spot examples of new harms, send them to your AI to learn from in seconds – easily build fine-tuning data for your models directly from reviewed content. Send examples to your models straight away and have them adapt to your content.
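As an illustration only (this is not Checkstep’s actual API or schema – all field names here are hypothetical), turning reviewed decisions into fine-tuning data often amounts to mapping each moderated item to a prompt/label record, one JSON object per line:

```python
import json

# Hypothetical reviewed items: content plus the moderator's final decision.
reviewed = [
    {"text": "buy followers cheap!!!", "decision": "spam"},
    {"text": "see you at the meetup",  "decision": "allow"},
]

def to_finetune_records(reviewed):
    """Convert reviewed decisions into prompt/label training records."""
    return [{"prompt": r["text"], "label": r["decision"]} for r in reviewed]

# One JSON object per line (JSONL) is a common fine-tuning upload format.
jsonl = "\n".join(json.dumps(rec) for rec in to_finetune_records(reviewed))
print(jsonl)
```

The point of the sketch is the shape of the data, not the plumbing: every human review you already perform can double as a labelled training example.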
Catch your worst users, not just the content they produce
- Investigate and drill down on specific users with a click – Good moderation means being proactive: nudging user behavior and sanctioning habitual offenders who damage your community. Aggregate all of a user’s behavior into one place, review it in a single pane, and take action to ban or suspend users based on your review.
- Find users with repeat offenses easily – Filter user investigations to surface the people with the most violations, the most borderline actions, or the newest accounts, so you can monitor your biggest risk areas and save time hunting for abusers on your platform.
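To make the idea concrete (again, an illustrative sketch with made-up record fields, not Checkstep’s actual data model), a repeat-offender filter over exported user records is essentially a threshold plus a sort:

```python
# Hypothetical user records, e.g. exported from a moderation dashboard.
# Field names are illustrative only.
users = [
    {"id": "u1", "violations": 7, "account_age_days": 400},
    {"id": "u2", "violations": 0, "account_age_days": 3},
    {"id": "u3", "violations": 2, "account_age_days": 12},
]

def repeat_offenders(users, min_violations=3):
    """Return users at or above a violation threshold, worst first."""
    flagged = [u for u in users if u["violations"] >= min_violations]
    return sorted(flagged, key=lambda u: u["violations"], reverse=True)

def new_accounts(users, max_age_days=30):
    """Return recently created accounts worth extra monitoring."""
    return [u for u in users if u["account_age_days"] <= max_age_days]

print([u["id"] for u in repeat_offenders(users)])  # ['u1']
print([u["id"] for u in new_accounts(users)])      # ['u2', 'u3']
```

The same pattern generalises to any risk signal – swap the threshold field for borderline-action counts or report volume.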
Large Team Security and Management Features
- Enterprise SSO – Increase security and simplify user management for your organization by using single sign-on with Checkstep. We’re fully integrated with the Microsoft Identity Platform, Okta, Google Login, and Federate, and we can support any OIDC-enabled sign-on system.
- Deep Control Over Moderator Group Access and Permissions – If you have a large moderator team, managing groups or skills assigned to individuals on the team is a snap. Create groups and give them permissions to view different queues, take different actions, or see different parts of your policy. Fully customize your moderation team permissions.
Multi-lingual Integrations Out-of-the-Box
- Easily customize policies by geography or content type – More countries are introducing specific rules around online content. Checkstep recently launched new tagging features that let customers set up different policies and rules for individual countries, making it easier to stay compliant with local requirements wherever you do business.
- Localized policies for transparency to all customers – For transparency, it’s critical to keep records of policy changes over time. Checkstep now captures every change across all of your supported languages.
- Community Report and Notice of Action Appeal flows in local languages – We’ve launched full localization for appeal and community report workflows, so any Checkstep customer can give their end users a localized experience with a single integration. The process for assessing and reviewing these messages is seamlessly integrated into your workflow.
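Conceptually, per-country policy tagging is a lookup from a user’s jurisdiction to a ruleset, with a sensible fallback. The sketch below is illustrative only – the country codes, rule names, and structure are assumptions, not Checkstep’s configuration format:

```python
# Illustrative only: a minimal country -> policy-ruleset lookup.
POLICIES = {
    "DE": {"hate_speech": "strict", "ads": "default"},
    "GB": {"hate_speech": "default", "ads": "strict"},
    "default": {"hate_speech": "default", "ads": "default"},
}

def policy_for(country_code: str) -> dict:
    """Pick the ruleset for a user's country, falling back to a default."""
    return POLICIES.get(country_code.upper(), POLICIES["default"])

print(policy_for("de")["hate_speech"])  # strict
```

The fallback entry matters: content from countries without bespoke rules still gets moderated against your baseline policy rather than slipping through.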
Video Moderation Made Easy
- Automatic transcription and translation on all videos – Videos longer than 30-60 seconds are a pain to moderate, particularly if you’re not sure where in the video the potential issue was identified. Checkstep launched automatic transcription and translation to make it easier to identify harmful dialogue.
- Flag timestamps for review to watch the most critical parts of a video – Jump quickly to harmful sections and view the AI flags for key types of harm at specific moments, so you can moderate a ten-minute video in seconds – no hunting for where the issues are!
- Up to six hours of scanning in a single video – Whether your video content is short or long, you can get deep scanning without any complex integrations. Checkstep now supports videos up to six hours in length.
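As a rough illustration of how timestamped flags save review time (a hypothetical sketch – the flag format and severity ordering here are assumptions, not Checkstep’s actual output), a review tool can sort flagged moments by harm type and render them as a jump list:

```python
# Hypothetical AI flags on a video, as (seconds, label) pairs.
flags = [(12.5, "profanity"), (340.0, "violence"), (97.2, "profanity")]

def review_order(flags, priority=("violence", "profanity")):
    """Sort flagged timestamps so the most severe harm types come first."""
    rank = {label: i for i, label in enumerate(priority)}
    return sorted(flags, key=lambda f: (rank.get(f[1], len(priority)), f[0]))

def to_timecode(seconds):
    """Format a timestamp as H:MM:SS for a moderator-facing jump list."""
    s = int(seconds)
    return f"{s // 3600}:{s % 3600 // 60:02d}:{s % 60:02d}"

for t, label in review_order(flags):
    print(to_timecode(t), label)  # e.g. "0:05:40 violence"
```

A moderator working from that list watches a few seconds around each flag instead of the whole video, which is where the "ten-minute video in seconds" claim comes from.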
And much more…
Interested to see some of these new features in action? Schedule a reconnect with the Checkstep team here.