Yes. We work with a number of Business Process Outsourcing (BPO) companies covering both large-scale, customer-service-style moderation and more specialised community engagement. Because we do not currently accept commission from our moderation partners, we can provide supplier-agnostic consultancy to help find the right solution for your needs.
What is Checkstep?
Checkstep provides all the tools needed to scale the Trust and Safety operations of online platforms by increasing the effectiveness and well-being of their moderation teams. It is designed for multiple roles within the Trust and Safety department: not only moderators, but also data scientists, heads of policy, software engineers, and auditors responsible for online-harm compliance.
Where does Checkstep operate?
Checkstep is registered in the UK and operates throughout Europe, North America and Asia. Driving efficiency through remote working and timezone coverage, our 24 employees are structured across Client Solutions (incl. Sales & Marketing), Product Development (Design & Engineering), AI/ML Development, Ethics & Policy, and Business Operations.
How much does Checkstep cost?
Our standard pricing model uses a simple matrix of set-up, volume tiers and operator seats. Our premium tier starts at £950 per month for up to 100,000 items and 1 operator seat.
What type of content can be moderated?
All content formats can be scanned and moderated: text, images, audio, video, URLs and GIFs.
What languages do you cover?
We currently process 100 languages natively and we can support every language through translation.
What type of categories can you recognize?
Some categories include: suicide/self-harm, explicit (incl. adult content, nudity), spam, aggressive (incl. violence, visually disturbing), hate (incl. bullying, threat, toxicity), drugs and illicit goods (incl. alcohol and tobacco).
Specialist harms: terrorism, CSAM (Child Sexual Abuse Material), personally identifiable information (PII), intellectual property (IP) infringement, fact-checking etc.
We add, adapt and tailor models to our clients' needs and thresholds, so the above list is non-exhaustive. Within some of these categories, there are also options to tailor different sub-categories to your individual requirements.
Do you offer age verification?
Yes, we offer Age Verification services. We can apply age estimation algorithms and also integrate end-to-end age and/or identity verification flows.
Do you cover live streaming?
Yes, we have live streaming capabilities.
Do you have developer docs?
Yes, developer documentation can be found here
Is the data ethically sourced?
Yes, our data is ethically sourced from Kaggle competitions and public datasets.
How can Checkstep help us remain DSA compliant?
We can help you prepare for DSA and OSB implementation whilst remaining consistent with data protection legislation. We can work directly with your governance team, if required, to run a gap analysis and formulate a risk profile appropriate to your individual business. We will then help drive implementation of those strategies through Checkstep tooling, your current stack integrations and your overall trust and safety operation, streamlining legislative requirements and driving down compliance costs.
Does your tool allow for user flagging?
Yes – Checkstep has a User Reporting feature: users can report content, the content is scanned by Checkstep, and the platform can then decide whether to action the content automatically or send it for human review.
Additionally, Checkstep allows you to scan user profiles for certain violations. This is done by decomposing a profile into its different elements (profile picture, URLs, bio, PII, …), scanning each component, and then aggregating the results into a user-profile detection strategy.
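As an illustration of the decompose-scan-aggregate approach described above, here is a minimal Python sketch. All function names, scores and profile fields below are hypothetical assumptions, not Checkstep's actual API; the aggregation rule shown (taking the maximum score per category across components) is just one possible strategy.

```python
# Hypothetical sketch: decompose a user profile into its elements, scan each
# one, and aggregate per-category scores by taking the maximum across components.
from collections import defaultdict

def scan_component(kind: str, value: str) -> dict:
    """Stand-in for a real moderation scan; returns harm-category scores.

    A real integration would call the moderation API here instead of
    using this hard-coded lookup.
    """
    fake_scores = {
        ("bio", "buy followers now!!!"): {"spam": 0.92},
        ("url", "http://example.com/promo"): {"spam": 0.40},
    }
    return fake_scores.get((kind, value), {})

def scan_profile(profile: dict) -> dict:
    """Aggregate component-level scores into profile-level scores."""
    aggregated = defaultdict(float)
    for kind, value in profile.items():
        for category, score in scan_component(kind, value).items():
            aggregated[category] = max(aggregated[category], score)
    return dict(aggregated)

profile = {"bio": "buy followers now!!!", "url": "http://example.com/promo"}
print(scan_profile(profile))  # {'spam': 0.92}
```

A platform could then feed the aggregated profile-level scores into the same threshold logic it uses for individual content items.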
How does your risk level classification system work?
The client decides on the risk thresholds for each harm category based on the AI confidence scores; Checkstep can provide guidance. These thresholds can be adjusted in the moderation interface.
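A minimal sketch of how such threshold-based routing might look in code. The categories, threshold values and action names below are illustrative assumptions, not actual Checkstep configuration:

```python
# Illustrative only: client-configured thresholds per harm category.
THRESHOLDS = {
    "hate": {"auto_remove": 0.95, "human_review": 0.70},
    "spam": {"auto_remove": 0.98, "human_review": 0.80},
}

def risk_action(category: str, score: float) -> str:
    """Map an AI confidence score to a moderation action via thresholds."""
    t = THRESHOLDS[category]
    if score >= t["auto_remove"]:
        return "auto_remove"
    if score >= t["human_review"]:
        return "human_review"
    return "allow"

print(risk_action("hate", 0.97))  # auto_remove
print(risk_action("spam", 0.85))  # human_review
print(risk_action("spam", 0.50))  # allow
```

Adjusting a threshold in the moderation interface amounts to editing one number in a table like this, shifting the boundary between automated action, human review and no action.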
Do you support flexible workflows?
Yes, we fully support a mix of:
– Different queues set up based on different detection policies, regions and regulations;
– Escalations within the platform to ensure content gets seen quickly by the right team;
– Different queues staffed by different teams of moderators;
– Queues ranked according to needs, e.g. first-in-first-out, minimising SLA breaches, or prioritising severe harms.
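The queue-ranking options above can be sketched as simple sort orders over pending items. The field names and sample items below are illustrative, not Checkstep's data model:

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    created_at: float    # arrival time (epoch seconds)
    sla_deadline: float  # review deadline (epoch seconds)
    severity: int        # higher = more severe harm

def fifo(items):
    """First-in-first-out: oldest content first."""
    return sorted(items, key=lambda i: i.created_at)

def sla_first(items):
    """Minimise SLA breaches: nearest deadline first."""
    return sorted(items, key=lambda i: i.sla_deadline)

def severity_first(items):
    """Prioritise severe harms: most severe first, ties broken by age."""
    return sorted(items, key=lambda i: (-i.severity, i.created_at))

queue = [
    Item("a", created_at=1.0, sla_deadline=50.0, severity=1),
    Item("b", created_at=2.0, sla_deadline=10.0, severity=3),
    Item("c", created_at=3.0, sla_deadline=30.0, severity=3),
]
print([i.id for i in fifo(queue)])            # ['a', 'b', 'c']
print([i.id for i in sla_first(queue)])       # ['b', 'c', 'a']
print([i.id for i in severity_first(queue)])  # ['b', 'c', 'a']
```

In practice a platform would pick one ranking per queue, so, for example, a CSAM queue could run severity-first while a general spam queue runs first-in-first-out.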
Can you offer human moderation resources?