
The longest 20 days and 20 nights: how can Trust & Safety Leaders best prepare for the US elections?


Trust and Safety leaders during the US elections: are you tired of election coverage and frenzied political discussion yet? It’s only 20 days until the US votes to elect either Kamala Harris or Donald Trump to the White House, and being a Trust and Safety professional has never been harder. Whether or not your site has anything to do with politics, it’s almost certainly coming up in comments, posts, reviews, or (worse) deepfake images or videos.

While most online companies would rather avoid politics altogether, anyone engaging a community of customers is going to get roped into ‘policing’ political content at some point. Be prepared to make tough decisions in the final weeks leading up to the election: make sure you have clear policies, that you have thought them through, and that you can stick to them when the going gets tough. The last thing you want is to be seen as removing customers’ content arbitrarily or with a political agenda.

Trust and Safety during US elections: Business Administration meets Philosophy

Getting your policy right is critical to handling political content on your site fairly and consistently. With polls showing that more and more Americans get their news from social media, it is more crucial than ever that a company’s policies are unbiased and enforced uniformly and justly for everyone. These platforms serve as a virtual public square, amplifying both official and unofficial sources of information. Protecting users from misinformation and political interference is not just a nice-to-have but a must-have, given the role online companies now play in shaping the world’s political future.

As the election countdown clock winds down to zero and the next US President is announced, online companies will face major content moderation hurdles. As we have seen, misinformation is rampant, from “they’re eating the dogs, they’re eating the cats” to “you can legally abort a child after it is born” to the JD Vance couch joke that turned into a trending story, and companies have important content moderation decisions to make. This is where strong policies and guidance take personal belief out of content moderation decisions, so that the right call can be made even when the moderator personally disagrees with it.

So what? If you want to believe misinformation, that is on you, not anyone else

Where does our responsibility begin, and more importantly, where does it end? Many have argued that it is not the role of online companies to “censor” what is being said: open the floodgates, let everything through, and people can decide for themselves what is true and what is misinformation. History has shown that this is too great a responsibility to place on individuals. With everything from sophisticated tooling for creating deepfakes and AI-generated content that can and does fool human review, to simple tools for podcasting, posting images, and producing quick videos, online companies must step in and take their content moderation responsibilities seriously.

When you create a platform to give people a voice, that comes with a responsibility to ensure the platform does not give a voice to those spreading misinformation, hate, or harmful content. Words have consequences, some intended and some not. During this election we have seen unchecked comments made during the Presidential debate lead to harmful offline consequences, for example children being evacuated from their school multiple days in a row because of threats of violence.

Applying the Trust and Safety recipe to prevent disaster during the US elections

Working with our customers, we’ve seen a range of common behaviours that need a Trust & Safety lens:

  • Political grandstanding in comments, reviews, or profiles where it’s not relevant
  • Debate and discussion turning into fights and threats
  • Viral misinformation spread for shock value and rage-baiting
  • Deepfakes and AI-generated content
  • Bots, rather than your community, spamming or amplifying content

What can be done? Online platforms can implement strategies such as well-developed, clear policies; third-party fact-checking; investment in both AI and human moderation; real-time content moderation; and an appeals process.
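To make that concrete, here is a minimal sketch in Python of how AI scoring, escalation to human review, and an appeal flag can fit together. The thresholds, policy names, and the triage helper are assumptions made up for illustration, not a description of any particular product.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- every platform tunes these against its own
# policies and risk tolerance.
AUTO_ACTION_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

@dataclass
class ModerationDecision:
    action: str        # "allow", "human_review", or "remove"
    policy: str        # which policy the decision was made under
    appealable: bool   # removals should always carry an appeal path

def triage(ai_scores: dict) -> ModerationDecision:
    """Route content on AI policy scores: auto-act on the clearest cases,
    send the grey zone to human moderators, and allow the rest."""
    policy, score = max(ai_scores.items(), key=lambda kv: kv[1])
    if score >= AUTO_ACTION_THRESHOLD:
        return ModerationDecision("remove", policy, appealable=True)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", policy, appealable=True)
    return ModerationDecision("allow", policy, appealable=False)

# Example: a post scored by an AI classifier against two election-related policies.
print(triage({"misinformation": 0.72, "violent_threat": 0.08}))
```

The point of the pattern is that the most confident cases are handled automatically while anything ambiguous lands in front of a human, and every enforcement action remains appealable.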

Having a full Trust and Safety toolkit (built around your policies) is a great starting point to ensure you’re ready to enforce your policy fairly and consistently. For Checkstep customers, we’ve seen that blending human moderation with AI scanning and automation unlocks opportunities to identify and (where necessary) remove or restrict the most harmful content in the political firestorm. Following the news and reviewing community-reported content on your platform can surface fast-moving conversations about misinformation, deepfakes, or other dangerous content. With keywords or (more effectively) with LLM labels, you can easily pick out content that references emerging trends and review it in Checkstep’s platform.
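As a rough illustration of the keyword-versus-LLM-label idea, the sketch below pairs a cheap keyword pass with the kind of prompt that could sit behind an LLM label. The watchlist terms, label wording, and prompt are invented for the example and are not Checkstep’s actual labels or API.

```python
import re

# Illustrative trend terms drawn from the examples above; in practice the
# watchlist changes daily as news and community reports surface new claims.
TREND_TERMS = [
    r"eating the (dogs|cats)",
    r"abort .{0,20} after (it is|they are) born",
    r"couch",  # deliberately broad: keyword lists get noisy fast, which is why LLM labels help
]
TREND_PATTERN = re.compile("|".join(TREND_TERMS), re.IGNORECASE)

def keyword_flag(text: str) -> bool:
    """Cheap first pass: does the post mention a tracked trend at all?"""
    return bool(TREND_PATTERN.search(text))

# A prompt along these lines could back an LLM label that also catches
# paraphrases the regex misses. The wording is illustrative only.
LLM_LABEL_PROMPT = (
    "You are labelling posts for an election-integrity review queue.\n"
    "Trend watchlist: the pet-eating rumour, the post-birth abortion claim, "
    "the JD Vance couch joke.\n"
    "Answer YES if the post references any watchlist trend, even indirectly; "
    "otherwise answer NO.\n\nPost: {post}"
)

post = "Heard they were eating the cats over there, wild if true"
if keyword_flag(post):
    print("Queued for review:", post)
```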

It’s not just about finding harmful content and protecting your community: you also need tools that help you ensure you’re enforcing your policies without bias, given the highly charged political climate. Checkstep gives you the ability to regularly run QA with secondary (or tertiary!) moderator reviews to identify areas where your AI or your moderation is overly restrictive or under-enforcing. Find the areas where your moderators don’t agree with decisions and use them to inspect your operation and your policy!
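One simple way to surface those disagreement areas is to compare primary and secondary decisions from a QA sample and compute a disagreement rate per policy area. The sample data and decision values below are invented for illustration.

```python
from collections import Counter

# Hypothetical QA sample: (policy area, primary decision, secondary decision).
# In a real workflow these pairs come from re-reviewing a random slice of
# recent decisions with a second (or third) moderator.
qa_sample = [
    ("misinformation", "remove", "remove"),
    ("misinformation", "remove", "allow"),
    ("misinformation", "allow", "remove"),
    ("harassment", "allow", "allow"),
    ("harassment", "remove", "remove"),
]

totals, disagreements = Counter(), Counter()
for policy, first, second in qa_sample:
    totals[policy] += 1
    if first != second:
        disagreements[policy] += 1

# Policy areas with high disagreement rates are where the policy wording,
# moderator training, or the AI model most needs a second look.
for policy, n in totals.items():
    rate = disagreements[policy] / n
    print(f"{policy}: {rate:.0%} disagreement across {n} QA reviews")
```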

Conclusion

None of this is easy, and online platforms have to strive to do the best they can to combat and prevent this harm. Avoiding censorship while protecting users’ right to freedom of speech is a delicate dance of policy implementation and social responsibility. Online companies must get content moderation right, not just as a technical issue but as a matter of safeguarding democracy itself.

What you can control as a Trust & Safety Leader: 

  • Share this link with your colleagues to make sure everyone knows where their polling place is,
  • Fill out the form below or click here to audit your current moderation system with us, and make sure you have no policy or technology gaps.

Audit your current moderation system
