
Expert’s Corner with Alexandra Koptyaeva

This month’s expert is Alexandra Koptyaeva. She wears many hats, one of which is Trust and Safety Lead at Heyday.

Alexandra has been working in Trust and Safety for over three and a half years, starting in content moderation, investigations, and quality assurance, and later leading a team and managing company processes. She began as a content moderator in 2018 with a focus on fraud in e-commerce. Back then, T&S was not as widely known as it is now. Her first role sparked her interest in investigations: she enjoyed the thrill of detecting fake brands from the smallest details (e.g., an obscured brand logo). Over the years she grew professionally, and in September 2022 she was hired as Trust and Safety Lead at Heyday, a new social media app. The role draws on all the experience and knowledge she has accumulated: she’s now responsible for everything from content moderation and policy management to handling escalations to NCMEC and Crisis Response services. She is excited by how her work can shape T&S at Heyday, looks forward to building new processes from scratch, and is especially eager to welcome alpha and beta users to the platform.

1. How can social media platforms better detect CSAM?

There are extensive resources available to prevent CSAM from spreading in the first place and to detect it in a timely manner. For example, partnering with the right content moderation solution and making sure its AI filters cover this topic would be the first step. At Heyday, we’re also going to implement the CSAM Keyword Hub as an extra precaution. It’s open source, which is especially handy for platforms with a limited budget for T&S.
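To make the keyword-list idea more concrete, here is a minimal sketch of how a platform might screen user-generated text against such a list. It is only an illustration under assumptions: the loading helper and the one-term-per-line export format are hypothetical, it is not the CSAM Keyword Hub’s actual interface, and any match should be routed to trained human reviewers and the platform’s escalation process rather than acted on automatically.

```python
# Minimal sketch of keyword-list screening. The loading helper and the
# one-term-per-line export format are hypothetical, not the Keyword Hub's API.
import re
from pathlib import Path

def load_keywords(path: str) -> set[str]:
    """Load lowercase terms from a one-term-per-line text file (hypothetical format)."""
    return {
        line.strip().lower()
        for line in Path(path).read_text(encoding="utf-8").splitlines()
        if line.strip()
    }

def flag_text(text: str, keywords: set[str]) -> list[str]:
    """Return the listed terms found in the text after light normalization."""
    normalized = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return [term for term in keywords if term in normalized]

# Hypothetical usage: in production the set would come from load_keywords(...)
# on the hub export; a placeholder set keeps the sketch self-contained here.
keywords = {"example banned term", "another term"}
hits = flag_text("sample user bio text", keywords)
if hits:
    print("Escalate to a senior moderator for review:", hits)
```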

When it comes to actual detection, it’s certainly not the easiest thing to catch. Users are very creative, and certain abbreviations, emojis, or combinations of signals might indicate suspicious behavior. Personally, I try to read whitepapers, follow updates, and attend webinars on this topic to stay informed. I also have a profile on the Heyday app so I can see what’s happening from the user’s perspective.

As I’ve previously managed teams of content moderators, I’d say that CSAM detection and timely action are a vital part of human moderation as well. That means keeping the team updated, organizing regular team meetings and calibrations, and presenting recent trends should all be on the agenda. Specialists should know how to recognize CSAM, what to pay attention to, what its signs are, and why it’s important to report it to senior managers and escalate it to law enforcement. If teams don’t understand its implications, then no algorithm will prevent it when a moderator approves a profile as a “false positive” and misses the alarming signs.

In parallel with the internal processes, [CSAM prevention] should also be reflected in community guidelines. It’s essential to be transparent and let users know how the platform positions itself and what actions will be taken against someone who might attempt to spread CSAM.

2. Nudity and social media platforms have often had some overlap. How can platforms better position themselves in terms of moderation to avoid instances of prostitution, flashing, and possible sextortion or blackmailing?

Educating users is key. As a policy manager, I can create the best community guidelines, but how can one be sure that they were read, understood, and accepted?

From the internal perspective, it’s helpful to provide clear guidelines with enforcement actions for each violation category: for instance, ban a user for a set duration if they upload inappropriate content, or temporarily restrict their activity. On the user’s end, I do think it’s important that they receive a message with a brief explanation of why their content was removed. It’s a small step, but at least it might make a user think twice before doing it again.
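To illustrate what clear guidelines with enforcement actions per violation category can look like in practice, here is a small sketch of a per-category enforcement map that also carries the brief explanation shown to the user. The category names, durations, and message texts are hypothetical examples, not Heyday’s actual policy.

```python
# Sketch of a per-category enforcement map; the categories, durations, and
# user messages below are hypothetical examples, not an actual policy.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Enforcement:
    action: str                 # e.g., "remove_content", "restrict", "ban"
    duration: timedelta | None  # None = permanent or not applicable
    user_message: str           # brief explanation shown to the user

ENFORCEMENT_MAP = {
    "inappropriate_content": Enforcement(
        action="ban",
        duration=timedelta(days=7),
        user_message="Your content was removed because it violates our guidelines on nudity.",
    ),
    "spam": Enforcement(
        action="restrict",
        duration=timedelta(hours=24),
        user_message="Your posting has been limited for 24 hours due to spam-like activity.",
    ),
}

def enforce(category: str) -> Enforcement:
    """Look up the enforcement action for a confirmed violation category."""
    return ENFORCEMENT_MAP[category]

# Hypothetical usage after a moderator confirms a violation:
decision = enforce("inappropriate_content")
print(decision.action, decision.duration, decision.user_message)
```

Keeping the actions and the user-facing explanations in one place makes it easier to apply them consistently and to notify users every time content is removed.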

Depending on the targeted age group, I’d also suggest organizing monthly campaigns to educate users, simply explaining why it’s not okay to share this content and how to avoid sextortion or blackmail if it comes to that. There’s no need to scare users away; based on my previous experience in moderation, I feel that some users (especially younger ones) share nudes due to a lack of awareness. The person on the other end has gained their trust, and they think nothing bad will happen because “they’re friends”. It was tough for me to read user reports and complaints when they were later bullied or blackmailed. A pop-up asking “Are you sure you want to share this?” might save someone’s sanity and dignity later.

If a platform is targeting 18+ users, it can be less strict about its guidelines. However, I’d still recommend having some protection in place to prevent flashing and prostitution. This can be done with the help of AI detection, effective moderation, and enforced community guidelines.

At Heyday, for example, we’ll be permanently banning users who are confirmed to be involved in prostitution, and we’ll be applying temporary bans for uploading or spreading inappropriate content.

3. How important is data privacy while moderating content?

From the platform’s perspective, I believe that protecting users’ privacy is a must. Apart from the formal documents where it’s reflected (e.g., the Privacy Policy for users and NDAs for staff), it should also be reinforced at the company level.

As a manager, I consider it one of my responsibilities to explain to the team the consequences of exposing someone’s information by any means, or of engaging with users from the moderator’s account. If I notice that either of these instructions was not followed, I’ll immediately follow up, which might lead to contract termination.

Let me give you two hypothetical scenarios:

(1) A content moderator decided to take a photo of their screen and share it on their social media. A username or a profile picture was visible, so someone might have identified this person. Not only does this put the end user at risk, but it also jeopardizes the platform’s integrity and undermines trust in it. I don’t think anyone would want to be in this situation as a user of any platform.

(2) A user sent a personal message or a friend request through the platform to a content moderator (if applicable), or a content moderator themselves decided to engage directly with a user. In both cases, I regularly remind my team never to engage, as they represent the company when using their work profiles.

I understand that there might be stressful situations when one feels like they have to do something to help a user (e.g., when working on time-sensitive cases), but that’s why each platform should have internal policies in place that are regularly reinforced, so moderators know what to do.

4. There’s often a thin line between over-moderation and censorship. How do you distinguish between the two?

I’ll answer this question through an example: when I began working closely with content moderators from different countries, I noticed that they tended to be more or less strict with content depending on their origin or place of birth. Although the platform had the same internal guidelines for everyone, they were interpreted differently by specialists from Latin America, Southeast Asia, and Europe. Some would remove any content that even slightly violated the rules, while others let it pass because it was a “cultural norm” for them and they didn’t find it suspicious.

To make sure that everyone not only understands but also enforces policies correctly across the company, I would suggest the following:

(1) Creating very detailed internal guidelines with as few grey areas as possible, so there’s no room for self-interpretation. If anything is not clear or a specialist is in doubt, [moderators] should escalate it to their team leaders and ask for a second opinion.

(2) Organizing regular calibrations with the team: collecting several random examples from different language markets and either asking everyone to reply in a form (e.g., Google Forms) or bringing them up during a team meeting and asking specialists to vote in the chat; a simple tally like the sketch after this list can highlight the examples where the team disagrees. I’ve participated in weekly team calibrations before, and they were always fun and very engaging.

(3) Doing quality assurance and passive shadowing with specialists; this way, a manager can identify inconsistencies and help improve the overall quality of moderation.
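As a lightweight companion to point (2), the sketch below tallies moderator votes per calibration example and flags the items with low agreement, since those usually point to a grey area in the guidelines. The vote data and the 0.75 agreement threshold are hypothetical.

```python
# Sketch of a calibration tally: flag examples where moderators disagree.
# The votes and the 0.75 agreement threshold are hypothetical values.
from collections import Counter

votes = {
    "example_1": ["remove", "remove", "approve", "remove"],
    "example_2": ["approve", "approve", "approve", "approve"],
    "example_3": ["remove", "approve", "remove", "approve"],
}

for item, decisions in votes.items():
    top_label, top_count = Counter(decisions).most_common(1)[0]
    agreement = top_count / len(decisions)
    if agreement < 0.75:
        print(f"{item}: only {agreement:.0%} agreed on '{top_label}', review the guideline")
```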

On the user level, listen to your community and what they’re saying about your company. I’d imagine that customer support might receive emails or Zendesk requests asking why certain content was removed. In a way, these requests are also a good resource for a policy manager and content moderators to analyze what could be improved and perhaps implement some changes.

5. When thinking about online safety, how much responsibility is on the end-user and how much do you believe is the platform’s responsibility?

Depending on the targeted location, there might be regulations in place when it comes to online safety, meaning that whether one wants it or not, a platform will be deemed responsible. I can’t express an opinion on specific legislation, as it’s currently less strict in the US, especially for platforms targeting the 18+ community. But I do think it becomes a question of liability and possible negligence if a platform knows what’s happening to its users, has all the means to prevent it, and yet doesn’t for whatever reason.

Talking about the end user, it depends on the situation. For instance, if a user joined a live stream with their camera on and got bullied by the community for no reason, then it’s hardly their fault. On the other hand, if a platform has policies in place to ensure that a user knows about the possible consequences of sharing their personal information, for example, and the user still does it, then I wouldn’t say it’s entirely the platform’s fault.

6. How can platforms better train end-users about the possible types of online harm without coming across as controlling?

a. How to ensure they’re not over-moderated?

b. How can platforms act as a supporter rather than forcing communities to say what’s right?

I’d say it depends on the type of platform, the age of its targeted users, and the possible threats it may face. Apart from the community guidelines, which usually mention some online risks, some platforms also use pop-up notifications to double-check whether a person really wants to send certain information, or they blur some content and inform the user that it might be harmful or inappropriate.
