Having been part of four Trust & Safety startups throughout my career, typically in pre-sales or customer-success roles, I’ve often found the greatest challenge in “closing the deal” isn’t technical at all. It’s regaining the trust of prospective customers who have been let down by unmet promises from prior relationships.
In this industry, it’s tempting to claim you can solve every problem. After all, most of us build Trust & Safety tooling because we want to tackle the challenges no one has yet conquered. But the reality is more complex. Once contracts are signed, true transformation depends on cross-functional collaboration - product, data science, and engineering teams the buyer likely never met during the “wining and dining” sales process. Months later, the “golden ticket” solution often reveals a few tarnished edges.
Vendor vs. Partner: Knowing the Difference
When speaking with an unsatisfied prospect, I often begin by asking: “When you had these conversations with X company, were you looking for a vendor or a partner?”
Responses usually fall into two camps: confusion (“What’s the difference?”) or resignation (“I wanted a partner, but I settled for a vendor”).
A vendor is like a vending machine: straightforward, transactional, and one-way. You select your item, pay, and hope the product meets expectations. If it doesn’t, there’s no one to fix it. A partner, by contrast, is like a restaurant. They ask if you’re celebrating, learn your preferences, and check in to ensure satisfaction. Great restaurants - and great partners - create personalized, two-way relationships that delight because they listen and craft an experience unique to your needs.
SaaS procurement isn’t so different. Some tools are “out-of-the-box” and static; others are “bespoke” or “custom,” with providers seeking design partners who co-create solutions for evolving needs.
How to Identify a Partner Mindset
The challenge is discerning whether your potential provider truly thinks like a partner. Sales teams highlight strengths, not shortcomings, so it’s up to you to probe deeper.
Below are ten key questions for Trust & Safety buyers evaluating AI-driven detection tooling in 2025, grouped into two areas: Detection and Model Performance, and Product and Customer Success. These questions are timely now and will need to evolve as the tooling does. We’re at the midpoint of the AI adoption curve, past the hype and into practical implementation. Just two years ago, AI’s compute costs limited experimentation. Today, processing costs are at least halving each year; by 2027, AI is projected to cost roughly 1/32 of its 2023 price, meaning what was once out of reach could now be applied widely across your organization.
Detection and Model Performance
1. Do you build your own models?
Building detection from the ground up offers total control over training data and retraining cycles, but often yields only a single model per harm type (e.g., adult content, hate speech, insults). If your policy doesn’t align with the provider’s definitions, you may be left managing ever-growing keyword lists to bridge the gap. From my experience at two such providers, I’ve seen models that performed well in testing degrade quickly in production.
For example, using one adult-content model for both a permissive dating app and a children’s game (where the word “kiss” is prohibited) left both clients dissatisfied: the model fit neither community, and low confidence in its results ruled out any robust automation strategy.
Today’s LLM-based approaches, trained on ever-broader datasets, offer far greater flexibility. They’re not perfect, but they’re vastly more adaptable to differing community norms and policy nuances.
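To make that flexibility concrete, here’s a minimal sketch of how a single policy-aware LLM classifier could serve the two clients from the earlier example, where one fixed adult-content model fit neither. The `call_llm` helper and both policy strings are hypothetical stand-ins, not any provider’s actual API.

```python
# A minimal sketch, not a real provider API: one LLM classifier adapted to two
# different platform policies via the prompt, where a single fixed adult-content
# model fits neither. `call_llm` is a hypothetical stand-in for your provider's
# inference endpoint.

DATING_APP_POLICY = (
    "Flirtatious and suggestive language is allowed; explicit sexual content is not."
)
KIDS_GAME_POLICY = (
    "Any romantic or suggestive language, including words like 'kiss', is prohibited."
)

def call_llm(prompt: str) -> str:
    """Hypothetical helper; replace with your provider's actual inference call."""
    raise NotImplementedError

def classify_with_policy(message: str, policy: str) -> str:
    prompt = (
        "You are a content moderator. Apply ONLY the policy below.\n"
        f"Policy: {policy}\n"
        f"Message: {message}\n"
        "Answer with VIOLATION or ALLOWED and a one-line reason."
    )
    return call_llm(prompt)

# The same message yields different, policy-appropriate outcomes:
#   classify_with_policy("I want to kiss you", DATING_APP_POLICY)  -> ALLOWED
#   classify_with_policy("I want to kiss you", KIDS_GAME_POLICY)   -> VIOLATION
```

The point isn’t the prompt itself; it’s that the policy travels with the request instead of being baked into a model you can’t change.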
2. Can I build my own model - and if not, how is my feedback used?
A genuine partner welcomes this question. They should clearly explain their model-building philosophy, retraining cadence, and how your input shapes future performance. Modern AI-first platforms can integrate user feedback in near real time - often within a minute.
If the provider offers only “out-of-the-box” models, expect minimal flexibility. Mature vendors’ models are typically entrenched, meaning your feedback may have little effect. Also ask directly: “Will another client’s feedback alter the model in a way that harms my performance?”
Clarify update cycles. Many data-science teams work in 2–4 week sprints. If your issue falls outside the current sprint, you could wait months for resolution. In 2025, “We’ll handle your emergency in two months” is unacceptable. Proprietary ML makes sense only if your community closely mirrors the provider’s primary market.
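As one illustration of what “near real time” feedback could look like in practice, versus waiting on a retraining sprint, here’s a rough sketch in which moderator corrections are stored and surfaced as precedents on the very next classification call. It reuses the hypothetical `call_llm` stub from the earlier example; this is one possible pattern, not any vendor’s actual design.

```python
# A sketch of near-real-time feedback, assuming the hypothetical `call_llm`
# helper from the previous example: moderator corrections become precedents on
# the very next call instead of waiting for a 2-4 week retraining sprint.

from collections import deque

recent_feedback: deque = deque(maxlen=50)  # rolling window of (message, correct_label)

def record_feedback(message: str, correct_label: str) -> None:
    """Call this when a moderator confirms or overturns an automated decision."""
    recent_feedback.append((message, correct_label))

def classify(message: str, policy: str) -> str:
    precedents = "\n".join(
        f"Message: {m}\nCorrect label: {label}" for m, label in recent_feedback
    )
    prompt = (
        f"Policy: {policy}\n"
        "Recent moderator corrections (treat these as precedents):\n"
        f"{precedents}\n"
        f"Message to review: {message}\n"
        "Answer with VIOLATION or ALLOWED."
    )
    return call_llm(prompt)  # hypothetical helper defined earlier
```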
3. Can you build with context in mind?
By 2025, this must be a given. Low-hanging harms (e.g., profanity) surface in single messages, but genuine protection demands contextual modeling. High-risk harms - child safety, grooming, radicalization - unfold gradually, often through coded or concealed language, and a single message seldom suffices to detect them.
I’ve personally encountered grooming cases where adults coached minors to avoid flagged keywords, knowing a content moderation system was in place. Only by analyzing the full conversation - how the tone shifted from friendly to exploitative - could a system detect this behavior.
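A rough sketch of the difference this makes: instead of scoring each message in isolation, the entire exchange is passed to the classifier with speakers and ordering preserved. Again, `call_llm` is the hypothetical stub from earlier, and the example turns are illustrative, not real data.

```python
# A sketch of conversation-level review, again using the hypothetical `call_llm`
# stub: the classifier sees the full ordered exchange, not one message at a time.

from typing import List, Tuple

def classify_conversation(turns: List[Tuple[str, str]]) -> str:
    """`turns` is an ordered list of (speaker, message) pairs."""
    transcript = "\n".join(f"{speaker}: {message}" for speaker, message in turns)
    prompt = (
        "Review the ENTIRE conversation for grooming indicators: trust-building, "
        "isolation, requests to move platforms, coded or concealed language.\n"
        f"{transcript}\n"
        "Answer with ESCALATE or NO_ACTION, citing the turns that drove the decision."
    )
    return call_llm(prompt)

# Each line below might pass a keyword filter on its own; the ordered sequence
# is what reveals the pattern (illustrative, not real data):
example_turns = [
    ("adult", "you're so mature for your age"),
    ("adult", "this can be our secret, don't tell your parents we talk"),
    ("adult", "let's move to another app - say 'homework' if anyone asks"),
]
```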
4. Do you offer a multi-modal approach to detection?
Multi-modal detection combines multiple input types - text, image, video, and metadata - into a unified decision. For example, when evaluating a user campaign, you’d want the title, promotional text, and images considered together to determine intent or risk. Platforms hosting varied content formats should insist on this capability for robust, user-level or case-level insight.
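Here’s a minimal sketch of what case-level fusion might look like, assuming hypothetical per-modality detectors: signals from the title, promotional text, image, and account metadata feed a single decision rather than three isolated verdicts.

```python
# A sketch of case-level, multi-modal fusion with hypothetical detectors:
# title, promotional text, image, and account metadata feed a single decision
# rather than three isolated verdicts.

from dataclasses import dataclass

@dataclass
class Campaign:
    title: str
    promo_text: str
    image_bytes: bytes
    account_age_days: int  # metadata can tip a borderline case

def score_text(text: str) -> float:
    return 0.0  # placeholder: swap in your real text detector

def score_image(image: bytes) -> float:
    return 0.0  # placeholder: swap in your real image detector

def review_campaign(c: Campaign) -> str:
    combined = max(
        score_text(c.title),
        score_text(c.promo_text),
        score_image(c.image_bytes),
    )
    # Individually borderline signals plus risky metadata can justify escalation.
    if c.account_age_days < 7:
        combined += 0.1
    return "ESCALATE" if combined >= 0.8 else "ALLOW"
```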
Product and Customer Success
The product and customer-success teams should operate in lockstep. A CSM can’t deliver excellence without understanding product direction; likewise, product managers need client feedback to prioritize effectively. True partners blur these departmental boundaries, each reinforcing the other.
5. Within your client portfolio, which industries are represented and how prevalent are they?
If you’re the first client in your sector, expect a learning curve, but this is also an opportunity to act as a design partner. New use cases strengthen a provider’s tooling by exposing blind spots. However, dominance by a single industry poses risks. At one provider I worked for, 90% of the chat data and 60% of revenue came from one client. Unsurprisingly, the models reflected that community’s norms and the roadmap reflected that client’s needs. Understanding this weighting helps you gauge whether your needs will receive equal attention.
6. If I need a feature built, what guarantees the delivery and timeline?
Sales teams often over-promise with the best of intentions, but technical scoping is unpredictable. If a feature is critical, codify it in the contract. Define what “finished” means and specify deadlines or staged milestones.
This isn’t adversarial; it provides clarity. A partner will appreciate your precision and may counter-propose phased delivery to ensure feasibility. They should always probe with “Why?” so all sides understand the business value behind the request. A feature is just a feature; the value it’s meant to deliver is what everyone should clearly understand too.
7. What’s on your product roadmap and who sets priorities?
A provider’s roadmap reveals direction and values. Startups may pivot rapidly; established firms move deliberately but can stagnate. Ask: What will you be building a year from now? Who drives roadmap decisions - clients, investors, or leadership? How do you balance innovation and maintenance? Even if you sit outside the product team yourself, perhaps in moderator operations, your experience will be shaped by these answers too.
8. How long does onboarding take for a client of my size and resources?
You’ll often hear, “We can move as quickly as you can.” Technically true, but practical value comes later. The initial integration may take hours, but achieving insight requires policy alignment, model testing, and prompt engineering. This process usually takes weeks, longer if your policies or datasets need refinement. Your readiness - documentation, moderation resources, tech stack - determines your onboarding speed. And once onboarding is complete, ask what comes next.
I’m a big fan of multi-year agreements, and not just because I’m on the provider side of this industry. Multi-year contracts protect both parties by allowing enough time for genuine progress. Annual contracts trap buyers in perpetual RFP cycles, reassessing options before outcomes mature. Multi-year partnerships foster trust, stability, and long-term planning - the hallmarks of collaboration.
9. What does a successful customer look like here, and how will we communicate?
The prospective partner should be able to clearly describe a successful customer, though that doesn’t mean the experience was smooth sailing the whole way. It means the customer has reached a state of recognized value, trust is established, and ideally a repeatable process is in place. No matter the product, a true partnership requires expectation setting and high-quality communication, usually more frequent during the onboarding phase of the relationship.
Define your ideal communication rhythm. Daily Slack chats? Monthly calls? Quarterly Business Reviews (QBRs)? All of the above? Effective QBRs should emphasize future alignment, not just past performance. They reaffirm partnership by ensuring goals remain shared. Check what cadence is included in your contract and whether it can evolve with your maturity in the tool.
10. Given my budget and resources, what’s realistically achievable?
Request real-world examples of similar clients. How much internal effort did their success require? True success depends on internal readiness: engineering time, executive sponsorship, and a culture that prioritizes the work of the Trust & Safety team. Even the best partner can’t substitute for missing internal alignment. If your boss doesn’t see the value of the endeavor, you haven’t secured the budget, or your engineering team’s time is already spoken for, success will be an uphill battle the entire way. Identify your team, secure buy-in, and commit collectively to the journey.
Parting Thoughts
Choosing a Trust & Safety partner in 2025 requires discernment. The AI landscape evolves rapidly, but the essence of partnership is timeless: communication, alignment, and shared accountability. Ask hard questions. Expect clear answers. Success doesn’t stem from the flashiest demo but from a partner who listens, learns, and builds with you - not merely for you.