We research and build technologies that make human-like decisions.

We focus on Online Harm Detection, Explainability, Online Learning and AI Fairness. Thought leadership in these domains allows CheckStep to offer a unique set of capabilities that let online platforms take control of their content.

Online Harm Detection

Our team is world-leading in methods that automatically detect deceptive and harmful content across categories including mis- and disinformation, propaganda and hate speech. Our technology uses contextual information, such as a user's historical contributions, to improve its decision accuracy.
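As a minimal sketch of the idea, the snippet below combines a text-only harm score with a contextual signal from a user's posting history. The lexicon, weights and function names are illustrative assumptions, not CheckStep's actual model.

```python
from math import exp

# Toy lexicon standing in for a learned text classifier (assumption).
HARM_TERMS = {"scam", "fake", "attack"}

def text_score(post: str) -> float:
    """Fraction of tokens that match the toy harm lexicon."""
    tokens = post.lower().split()
    return sum(t in HARM_TERMS for t in tokens) / max(len(tokens), 1)

def context_score(user_history: list[bool]) -> float:
    """Share of the user's past posts that were flagged as harmful."""
    if not user_history:
        return 0.0
    return sum(user_history) / len(user_history)

def harm_probability(post: str, user_history: list[bool]) -> float:
    """Logistic combination of text evidence and user-history context.
    The weights (4.0, 2.0) and bias (-2.0) are illustrative."""
    z = 4.0 * text_score(post) + 2.0 * context_score(user_history) - 2.0
    return 1.0 / (1.0 + exp(-z))
```

With this combination, the same post scores higher for a user with a history of violations than for a user with a clean record, which is the contextual effect described above.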

Explainable Deep Learning

Moderators are required by law to give a reason for removing inappropriate user content. Explanations from human moderators can be time-consuming, inconsistent and biased, while machines lack contextual understanding. CheckStep's research in Explainable AI (XAI) enables human moderators to be faster and more consistent.
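One simple form such an explanation can take is token-level attribution: score a post, then report which tokens drove the decision so a moderator can cite them. The toy lexicon scorer and leave-one-out attribution below are illustrative assumptions, not CheckStep's XAI pipeline.

```python
# Toy lexicon standing in for a learned harm scorer (assumption).
HARM_TERMS = {"scam", "fake", "attack"}

def score(tokens: list[str]) -> float:
    """Fraction of tokens that match the toy harm lexicon."""
    return sum(t in HARM_TERMS for t in tokens) / max(len(tokens), 1)

def explain(post: str) -> list[tuple[str, float]]:
    """Leave-one-out attribution: a token's weight is how much the
    harm score drops when that token is removed."""
    tokens = post.lower().split()
    base = score(tokens)
    return [
        (tok, base - score(tokens[:i] + tokens[i + 1:]))
        for i, tok in enumerate(tokens)
    ]

def rationale(post: str, top_k: int = 2) -> list[str]:
    """Top positively-attributed tokens a moderator could cite
    when explaining a removal decision."""
    ranked = sorted(explain(post), key=lambda kv: kv[1], reverse=True)
    return [tok for tok, weight in ranked[:top_k] if weight > 0]
```

The same attribution idea scales to real models (e.g. by masking input tokens of a neural classifier), turning a bare "removed" decision into a citable reason.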

Few-Shot Learning

Content moderation requires near real-time responses to threats such as fake news, hate messages and disturbing live content. A fixed set of classifiers cannot keep machine learning models up to date. Instead, CheckStep has developed adaptive learning techniques such as domain adaptation and few-shot learning, and is leading the way in the development and evaluation of online learning for detecting online harm, including hate speech, disinformation and propaganda.
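To illustrate the few-shot idea, the sketch below uses a prototype (nearest-centroid) classifier: a brand-new harm category is added from just a couple of labelled examples, with no full retraining. The bag-of-words vectors, class names and example texts are illustrative assumptions, not CheckStep's production system.

```python
from collections import Counter

def vectorise(text: str) -> Counter:
    """Sparse bag-of-words vector for a post."""
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (sum(v * v for v in a.values()) ** 0.5) * \
           (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

class PrototypeClassifier:
    """Few-shot classifier: each class is the sum (prototype) of its
    example vectors; prediction picks the nearest prototype."""

    def __init__(self) -> None:
        self.prototypes: dict[str, Counter] = {}

    def add_class(self, label: str, examples: list[str]) -> None:
        """Register a class online from a handful of examples."""
        proto = Counter()
        for ex in examples:
            proto.update(vectorise(ex))
        self.prototypes[label] = proto

    def predict(self, text: str) -> str:
        vec = vectorise(text)
        return max(self.prototypes,
                   key=lambda lb: similarity(vec, self.prototypes[lb]))

clf = PrototypeClassifier()
clf.add_class("benign", ["great photo thanks for sharing",
                         "see you at the meetup"])
# A new harm category, learned from only two labelled examples:
clf.add_class("hate", ["those people are vermin",
                       "vermin do not belong here"])
```

Because new classes are added by updating a dictionary of prototypes rather than retraining, this style of model can respond to an emerging threat in near real time, which is the property the paragraph above describes.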

Get started with CheckStep

Request a demo