Scaling real-time chat moderation for Star Stable, the online horse video game
Creating a safe, positive, and inclusive environment is fundamental to Star Stable’s mission. The studio invests heavily in Trust and Safety initiatives - including Checkstep's AI content moderation platform - and continuously evolves its approach to moderation as its community grows.
About Star Stable
Star Stable Online is one of the world’s largest online horse adventure video games. Built around story-driven gameplay, exploration, and community, Star Stable is designed to empower players - particularly girls and young women - through immersive narratives and social interaction.
The game supports a large, global player base across multiple age groups, with in-game chat playing a central role in how players connect. Every month, Star Stable processes tens of millions of chat messages across global, group, and one-to-one conversations, spanning 14 languages.
How do you moderate high-volume, real-time chat without slowing gameplay or falling behind evolving language?
As Star Stable’s player base expanded, its existing moderation approach struggled to keep pace with both scale and speed. Harmful content was occasionally missed, surfacing later in screenshots and community forums.
A key challenge was the rapid evolution of player language. Younger audiences frequently invent new slang and coded terms, often in response to moderation rules. Entire communities also communicate off-platform in tools like Discord, accelerating linguistic drift even further. As a result, static keyword-based systems implemented when the game first came to market quickly became outdated.
Operationally, moderation updates weren’t as quick as desired. When new harms or cultural references emerged, it could take weeks - or longer - for models to be retrained and deployed, leaving the team feeling reactive rather than in control of shaping their community.
Multilingual support added further complexity. The previous setup relied on a separate model per language, duplicating effort across the community's 14 primary languages. It also required manually labelling thousands of messages per week - labels that were often at risk of being considered ‘stale’ by the time they were reviewed.
Finally, traditional LLM-based approaches proved unsuitable for live chat, as they introduced delays that were noticeable in-game. Star Stable needed moderation decisions in tens of milliseconds to ensure harmful content never appeared on-screen.
THE BRIEF
What Star Stable was looking for from a Trust & Safety Partner
Recognising these challenges, Star Stable set out to find a solution that could support real-time chat at scale, while remaining flexible enough to adapt to fast-changing player behaviour.
Low-latency
Moderate tens of millions of chat messages per month with ultra-low latency
Adaptive
Adapt quickly to evolving slang, coded language and emerging harms.
Global
Support 14 languages without maintaining separate models.
Contextual
Incorporate conversational context between multiple users, not just individual messages.
Self-sufficient
Reduce reliance on large external labelling teams.
Real-time
Give internal teams direct control over real-time policy updates.
THE SOLUTION
A real-time, context-aware approach to chat moderation
Star Stable partnered with Checkstep, a leader in AI-driven content detection, following extensive experimentation and proof-of-concept testing that spanned more than a year. The resulting solution was purpose-built for high-volume gaming chat, giving Star Stable the confidence to deploy an AI-forward strategy beneath the surface - supported by human-in-the-loop oversight - to maintain a healthy community.

Ultra-fast moderation using foundational models
Instead of relying on LLM inference, Checkstep deployed foundational models capable of making moderation decisions in under 50 milliseconds.
This lets chat moderation run invisibly in the background without disrupting gameplay, so harmful chat messages never noticeably appear on-screen.
Example-driven policy updates
Rather than retraining models on massive datasets, Checkstep enables policy updates using small, targeted example sets.
Moderators can pull real chat messages into the platform, label them, and immediately influence system behaviour. Updates go live in under a minute, replacing weeks-long update cycles.
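To make the idea concrete, here is a minimal sketch of example-driven classification, where labelled messages take effect immediately instead of triggering a retrain. The class, the token-overlap similarity metric, and the labels are all illustrative assumptions - this is not Checkstep's actual implementation.

```python
# Hypothetical sketch: moderators add labelled example messages, and new
# messages are scored by similarity to the closest labelled example.
# Adding an example changes behaviour instantly - no model retraining.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Token-overlap similarity between two messages (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

class ExamplePolicy:
    def __init__(self):
        self.examples = []  # (message, label) pairs added by moderators

    def add_example(self, message, label):
        # Live immediately: the next classify() call already sees it.
        self.examples.append((message, label))

    def classify(self, message, threshold=0.5):
        best_label, best_score = "allow", 0.0
        for example, label in self.examples:
            score = jaccard(message, example)
            if score > best_score:
                best_label, best_score = label, score
        return best_label if best_score >= threshold else "allow"

policy = ExamplePolicy()
# A moderator spots new coded slang and labels one real message:
policy.add_example("trade me ur acct pass", "block")
print(policy.classify("pls trade me ur acct"))  # similar wording -> "block"
```

A production system would use learned embeddings rather than token overlap, but the operational property is the same: one labelled example updates behaviour in seconds, not weeks.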
Contextual chat analysis
Checkstep worked with Star Stable to introduce contextual scoring, allowing messages to be assessed both individually and within an evolving conversation.
This made it possible to detect harms that only emerge across multiple messages - historically a blind spot for message-by-message moderation.
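The principle can be illustrated with a toy sketch: each message is evaluated alongside a sliding window of recent messages in the same conversation, so a pattern split across several innocuous-looking messages can still match. The window size, rule, and labels here are illustrative assumptions, not Checkstep's model.

```python
from collections import defaultdict, deque

# Hypothetical illustration of context-aware scoring: keep the last few
# messages per conversation and score each new message together with them.

WINDOW = 5  # how many recent messages to retain per conversation

history = defaultdict(lambda: deque(maxlen=WINDOW))

def score(conversation_id, message):
    history[conversation_id].append(message.lower())
    window_text = " ".join(history[conversation_id])
    # A grooming-style pattern that only emerges across messages:
    # asking a player's age, then where they live, in separate messages.
    if "how old" in window_text and "where do you live" in window_text:
        return "escalate"
    return "allow"

print(score("c1", "hey! how old are you?"))     # prints "allow" - harmless alone
print(score("c1", "cool. where do you live?"))  # prints "escalate" - with context
```

A real system would use a learned model over the conversation window rather than substring rules, but the structure - scoring the message within its conversational context - is the same.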
Multilingual moderation with a single model
With support for 100+ languages, the same model now powers moderation across all 14 of Star Stable’s primary languages.
Improvements in one language can benefit others, dramatically reducing operational overhead.
Integrated investigations and workflows
Beyond automation, Star Stable adopted Checkstep’s investigation tools and chat-optimised UI.
This includes list-based case views designed for reviewing dense chat logs, expanding Checkstep’s role across their broader trust & safety operations.
Onboarding with Checkstep is quick and simple.
Checkstep's short time to value meant Star Stable was up and running with AI-powered content moderation almost immediately.
“Getting started with the Checkstep platform was even easier than expected - a pain-free launch!”
Senior Producer & Localisation Lead
THE IMPACT
Faster decisions, higher accuracy, and greater control
Since deploying Checkstep, Star Stable has significantly improved the effectiveness and responsiveness of its chat moderation, along with the efficiency of its human review teams.
By combining speed, flexibility, and context-aware moderation, Checkstep has enabled Star Stable to better protect its vibrant global community - without compromising the real-time player experience that has defined the game for more than a decade.
More accurate
~50% improvement in accuracy at launch, with continued gains over time
Faster decisions
Sub-50 ms moderation decisions, suitable for live in-game chat
Quicker policy updates
Near-instant policy updates, replacing weeks-long update cycles
Simpler operations
Simplified multilingual operations across 14 languages
Greater autonomy
Greater autonomy for internal teams, reducing dependence on external labelling providers
Integrated moderation
A balanced moderation strategy that integrates AI with human intervention, enabling improved decision-making
Want to see our AI content moderation platform for yourself?
Book a demo to see how it can help you deliver safer, more inclusive content at scale.
Trusted by global leaders in trust and safety