Trust and Safety leaders during the US elections: are you tired of election coverage and frenzied political discussion yet? It’s only 20 days until the US votes to elect either Kamala Harris or Donald Trump to the White House, and being a Trust and Safety professional has never been harder. Whether or not your site has anything to do with politics, the election is almost certainly coming up in comments, posts, reviews, or (worse) deepfake images and videos.
While most online companies would rather avoid politics altogether, anyone engaging a community of customers is going to get roped into ‘policing’ political content at some point. Be prepared to make tough decisions in the final weeks leading up to the election: make sure you have clear policies, that you’ve stress-tested them, and that you can stick to them when the going gets tough. The last thing you want is to be seen as removing customers’ content arbitrarily or with a political agenda.
Trust and Safety during US elections: Business Administration meets Philosophy
Getting your policy right is critical to handling political content on your site fairly and consistently. With polls showing that more and more Americans get their news from social media, it is more crucial than ever that platforms’ policies are unbiased and enforced uniformly for everyone. These platforms serve as a virtual public square, amplifying both official and unofficial sources of information. Protecting users from misinformation and political interference is not just a nice-to-have but a must-have, given the role online companies play in shaping the world’s political future.
As the election countdown clock winds down to zero and our next US President is announced, online companies will face major content moderation hurdles. Misinformation is rampant, from “they’re eating the dogs, they’re eating the cats” to “you can legally abort a child after it is born” to the JD Vance couch joke that turned into a trending story, and companies have consequential moderation decisions to make. This is where strong policies and guidance take personal belief out of content moderation decisions, so that the right call gets made even when the moderator does not personally agree with it.
So what? If you want to believe misinformation, that’s on you, not anyone else
Where does our responsibility begin, and more importantly, where does it end? Many have argued that it is not the role of online companies to “censor” what is being said: open the floodgates, let everything through, and people can decide on their own what is true and what is misinformation. History has shown that this is too great a responsibility to place on individuals. With sophisticated tooling available for creating deepfakes and AI-generated content that can and does fool human review, alongside simple tools for podcasting, posting images, and making quick videos, online companies must step in and take their content moderation responsibilities seriously.
When you create a platform to give people a voice, you also take on the responsibility of ensuring that it does not give voice to those spreading misinformation, hate, or harmful content. Words have consequences, some intended and some not. During this election we have seen unchecked claims made during the Presidential debate lead to harmful offline consequences, for example children being evacuated from their school multiple days in a row after threats of violence.
Applying the Trust and Safety recipe to prevent disaster during the US elections
Working with our customers we’ve seen a range of common behaviours that need a Trust & Safety lens:
- Political grandstanding in comments, reviews, or profiles where it’s not relevant
- Debate and discussion turning into fights and threats
- Viral misinformation spread for shock value and rage-baiting
- Deepfakes and AI-generated content
- Bots spamming or amplifying content rather than your community
What can be done? Online platforms can implement strategies such as well-developed, clear policies; third-party fact-checking; investment in both AI and human moderation; real-time content moderation; and a robust appeals process.
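To make the last two concrete, here is a minimal sketch of real-time moderation with human escalation. It’s illustrative Python with made-up thresholds, policy labels, and helper names, not Checkstep’s implementation: automation handles the clear-cut cases, humans handle the grey zone, and anything actioned stays appealable.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative thresholds only; real values come from your own policy
# calibration and model evaluation, not from this post.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

class Action(Enum):
    KEEP = "keep"
    HUMAN_REVIEW = "human_review"
    REMOVE = "remove"

@dataclass
class Decision:
    action: Action
    policy: str
    score: float
    appealable: bool

def triage(policy: str, score: float) -> Decision:
    """Route one AI classifier score into keep / review / remove lanes."""
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision(Action.REMOVE, policy, score, appealable=True)
    if score >= HUMAN_REVIEW_THRESHOLD:
        # The grey zone: a human moderator makes the final call.
        return Decision(Action.HUMAN_REVIEW, policy, score, appealable=True)
    return Decision(Action.KEEP, policy, score, appealable=False)

# A post scored 0.72 against a hypothetical "election_misinformation"
# policy goes to a human reviewer instead of being removed automatically.
print(triage("election_misinformation", 0.72))
```

The two thresholds are the design choice that matters: the gap between them defines the grey zone you staff humans for, and narrowing it trades reviewer workload against automation errors.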
Having a full Trust and Safety toolkit (built around your policies) is a great starting point to ensure that you’re ready to enforce your policy fairly and consistently. For Checkstep customers, we’ve seen that blending human moderation with AI scanning and automation unlocks opportunities to identify and (where necessary) remove or restrict the most harmful content in the political firestorm. Following the news and reviewing community-reported content on your platform can surface fast-moving conversations about misinformation, deepfakes, or other dangerous content. With keywords or (more effectively) with LLM labels, you can easily pick out content that references emerging trends and review it with Checkstep’s platform; a rough sketch of that two-layer approach follows.
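Everything in this sketch is hypothetical: the watchlist, the label names, and the `classify_with_llm` stub, which stands in for whatever LLM client you use (Checkstep customers would get labels through the platform instead).

```python
import re

# Hypothetical watchlist of emerging election narratives; in practice your
# T&S team would update this as the news cycle moves.
TREND_KEYWORDS = {
    "ballot_misinfo": [r"\bshredded ballots?\b", r"\brigged election\b"],
    "ai_media": [r"\bdeepfake\b", r"\bAI[- ]generated\b"],
}

def keyword_labels(text: str) -> set[str]:
    """Cheap first pass: regex keywords catch exact phrasings only."""
    return {
        label
        for label, patterns in TREND_KEYWORDS.items()
        if any(re.search(p, text, re.IGNORECASE) for p in patterns)
    }

def classify_with_llm(text: str) -> set[str]:
    """Stub for an LLM call. A real version would prompt your model of
    choice to apply the same label set and parse its response."""
    return set()  # replace with your provider's client

def trend_labels(text: str) -> set[str]:
    # LLM labels catch paraphrases and novel wordings that keywords miss.
    return keyword_labels(text) | classify_with_llm(text)

post = "heard they found thousands of shredded ballots downtown last night"
print(trend_labels(post))  # {'ballot_misinfo'}
```

Keywords are fast but brittle; the LLM pass is what keeps recall up as phrasing mutates, which is why it’s the more effective layer.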
It’s not just about finding harmful content and protecting your community: you also need tools that help you ensure you’re enforcing your policies without bias in a highly charged political climate. Checkstep gives you the ability to regularly run QA with secondary (or tertiary!) moderator reviews to identify areas where your AI or your moderation is overly restrictive or under-enforcing. Find the areas where your moderators don’t agree with decisions and use them to inspect your operation and your policy!
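A simple agreement report over your QA log is often enough to find those areas. The sketch below uses invented sample data and plain Python (not Checkstep’s actual export format) to show the idea:

```python
from collections import defaultdict

# Invented QA log: (policy_area, primary_decision, secondary_decision).
# In practice this would come from your moderation tool's review exports.
qa_log = [
    ("political_misinfo", "remove", "remove"),
    ("political_misinfo", "remove", "keep"),
    ("political_misinfo", "keep", "keep"),
    ("harassment", "keep", "remove"),
    ("harassment", "remove", "remove"),
]

agree, total = defaultdict(int), defaultdict(int)
for policy, primary, secondary in qa_log:
    total[policy] += 1
    agree[policy] += primary == secondary

# Low agreement flags a policy area where guidance is ambiguous or
# enforcement is drifting in one direction.
for policy, n in total.items():
    print(f"{policy}: {agree[policy] / n:.0%} agreement over {n} reviews")
```

Sort that report ascending and the lowest-agreement policy areas are where your guidance, your training, or your AI thresholds need attention first.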
Conclusion
None of this is easy, and online platforms have to do the best they can to combat and prevent this harm. Avoiding censorship while protecting users’ right to freedom of speech is a delicate dance of policy implementation and social responsibility. Online companies must get content moderation right, not just as a technical issue but as a matter of safeguarding democracy itself.
What you can control as a Trust & Safety Leader:
- Share this link with your colleagues to make sure everyone knows where their polling place is.
- Fill out the form below or click here to audit your current moderation system with us, and make sure you have no policy or technology gaps.