Expert’s Corner with Lauren Tharp from Tech Coalition

For this month’s Expert’s Corner we had the pleasure of interviewing Lauren Tharp from the Tech Coalition.

The Tech Coalition is a global alliance of leading technology firms that have come together to combat the online sexual abuse and exploitation of children. Because member companies share the same goals and face many of the same challenges, we know that collaborating to develop and scale solutions offers the most promising path to addressing the problem globally.

Lauren Tharp is a Technical Program Manager based in Richmond, VA. She focuses her time on helping companies adopt technologies to combat online child exploitation and abuse, and she facilitates collaboration among a diverse set of industry members. Prior to joining the Tech Coalition, she worked as a Product Leader in the podcasting space, where she learned the importance of Trust and Safety through the lens of brand safety. The answers included are not intended to represent any member of the Tech Coalition or affiliated partners.

1. What is the mission of the Tech Coalition? What is the idea behind it?

The Tech Coalition facilitates the global tech industry’s fight against the online sexual abuse and exploitation of children. We are the place where tech companies all over the world come together on this important issue, recognizing that this is not an issue that can be tackled in isolation. We work relentlessly to coach, support, and inspire our industry Members to work together as a team to do their utmost to achieve this goal.

Every half second, a child makes their first click online — and the tools we all value most about the internet — our ability to create, share, learn and connect — are the same tools that are being exploited by those who seek to harm children. In this increasingly digital world, the technology industry bears a special responsibility to ensure that its platforms are not used to facilitate the sexual exploitation and abuse of children. Child protection is one place where our Members do not compete, but rather they work together to pool their collective knowledge, experience, and advances in technology to help one another keep children safe.

An example of how our work comes together is tech innovation, which is also the work I’m most passionate about. Our Tech Innovation working group exists first and foremost to increase our members’ technical capabilities to combat online child sexual abuse material (CSAM). That means that even the smallest startups have access to the same knowledge and tools for detecting and preventing CSAM as the largest tech companies in the world. We help members adopt existing technologies, such as tools that find known CSAM images and videos. We also fund pilots to innovate on new solutions, such as machine learning to detect novel CSAM and reduce the need for human review. We also work closely with Thorn, who have been invaluable partners in developing technology, and with other subject matter experts throughout the industry to push innovation for our members.

2. With more and more children spending time online, child safety should always be a priority for social media platforms. Have you seen specific trends or patterns in online child abuse? Are they getting harder to detect?

I’d say there are two major factors making it harder to readily detect online child sexual exploitation and abuse (OCSEA). The first is access. Many of us spend a significant amount of time online, where we engage not only with trusted family and friends but also with strangers. That openness has largely been a success: think about cold outreach for a new job, or finding peers who share a niche hobby. But the tradeoff is that it has also made it easier for bad actors to make contact with or groom children online. Recent studies have shown that nearly 40% of children have been approached by adults who they believe were trying to “befriend and manipulate them”. So I think we will continue to face the challenge of how to safeguard children online as bad actors subvert protective measures at an increasingly rapid pace.

The second factor is new content. Online CSAM has often taken the form of photos or pre-recorded videos, so detection technology developed around those formats. As users adopt new technologies such as live streaming, podcasting, direct messaging platforms, and gaming channels, detection tools have to be retrained for those use cases. The difficulty is in keeping up with the pace of that change, and in anticipating where abuse might occur next.

3. When it comes to moderation, human review alone is unable to deal with the scale of the issue. What types of AI do you see innovating in this space to help companies keep up with increasing volumes?

Human moderators are such an important part of keeping the Internet safe and free of child abuse imagery, but as noted, the scale of the problem requires innovative solutions (not to mention the psychological toll endured by many content moderators). To address these challenges, many companies use hashing technologies across photos and videos to detect and remove known CSAM. Hashing works by creating a digital “fingerprint” of photos or videos that have been deemed CSAM by a human moderator. These hashes are then stored in databases that can be used across the industry to automatically detect when the same content is shared again. The high degree of accuracy means less human moderation of content that we already know is violating.
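
To make the idea concrete, here is a minimal sketch of hash matching in Python. The hash value in the set is a placeholder, and real deployments typically use perceptual hashes (such as PhotoDNA or PDQ) so that re-encoded or slightly altered copies still match, whereas the plain cryptographic hash shown here only catches byte-identical files:

```python
import hashlib
from pathlib import Path

# Placeholder entries standing in for a vetted database of hashes of
# known, human-verified CSAM (e.g. an industry hash list). This value
# is illustrative only.
KNOWN_CSAM_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(path: Path) -> str:
    """Compute a SHA-256 "fingerprint" of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_csam(path: Path) -> bool:
    """True if the file's fingerprint matches the known-hash database."""
    return fingerprint(path) in KNOWN_CSAM_HASHES
```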

While hashing is excellent at preventing the spread of known CSAM, it cannot detect new photos or videos. This is where machine learning classifiers come into play. Classifiers automatically assess whether content falls into categories such as nudity, age, or sexual acts. By combining these signals, companies can quickly decide which content should be escalated for human review and which does not meet the definition of CSAM.
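
As a rough illustration of how such signals might be combined, the sketch below routes content based on hypothetical per-category classifier scores; the category names and thresholds are invented for the example, not drawn from any production system:

```python
# Scores are assumed to come from upstream classifiers, each in [0, 1].
# Category names and the threshold are illustrative assumptions.
ESCALATE_THRESHOLD = 0.8

def triage(scores: dict[str, float]) -> str:
    """Combine per-category classifier scores into a routing decision."""
    minor = scores.get("minor", 0.0)
    sexual = max(scores.get("nudity", 0.0), scores.get("sexual_act", 0.0))
    # Suspected CSAM needs both signals: a likely minor AND sexual content.
    if minor * sexual >= ESCALATE_THRESHOLD:
        return "escalate_to_human_review"
    if sexual >= ESCALATE_THRESHOLD:
        return "adult_content_policy_queue"
    return "no_action"

print(triage({"minor": 0.95, "nudity": 0.92}))  # escalate_to_human_review
```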

4. Organizations like NCMEC, Thorn and IWF help with detection of child sexual abuse material (CSAM), but what about child grooming? This is often harder to detect. How can platforms better prepare themselves?

Grooming is a complex topic for many reasons, including the fact that there is no standard definition of what constitutes grooming; it can vary by platform, language, culture, and other factors. In short, grooming is all about context. A classic example is the phrase “Do you want to meet on Friday?”, which could be appropriate on a dating app but potentially inappropriate on a children’s gaming platform.

As a result, an organization’s approach to grooming detection must evolve over time to accommodate new trends. We typically recommend that companies start with keyword lists, such as the CSAM Keyword Hub, which the Tech Coalition developed in partnership with Thorn. The hub pulls known terms and slang related to online grooming and child sexual abuse so that organizations can begin to filter terms and adjust their content moderation strategies.
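
As a simple sketch of that first step, the snippet below filters messages against a small watchlist; the terms shown are placeholders, since the Keyword Hub’s actual entries are shared with members rather than reproduced here:

```python
import re

# Placeholder terms standing in for vetted entries from a resource such
# as the CSAM Keyword Hub; real lists contain known slang and coded language.
GROOMING_KEYWORDS = {"example_coded_term", "example_slang_term"}

KEYWORD_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, sorted(GROOMING_KEYWORDS))) + r")\b",
    re.IGNORECASE,
)

def flag_terms(message: str) -> list[str]:
    """Return any watchlist terms found in a message, for human review."""
    return KEYWORD_PATTERN.findall(message)
```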

Increasingly we see a shift towards using AI to detect grooming by analyzing text (such as in public conversation channels) and by noticing behavioral signals (such as an adult user randomly befriending 100 minors in the span of an hour). The Coalition is working on training grooming classifiers for specific platform use cases, and it continues to fund research to understand perpetrators’ grooming strategies.
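
A behavioral signal like the friend-request example can be approximated with a sliding window. The sketch below is a deliberately simplified assumption (the window and threshold are chosen arbitrarily), not a description of how any particular platform implements this:

```python
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)   # illustrative sliding window
REQUEST_LIMIT = 20            # illustrative flagging threshold

class MinorContactMonitor:
    """Flags an adult account that sends a burst of friend requests
    to accounts believed to belong to minors."""

    def __init__(self) -> None:
        self._timestamps: deque[datetime] = deque()

    def record_request_to_minor(self, at: datetime) -> bool:
        """Record one request; return True if the account should be flagged."""
        self._timestamps.append(at)
        # Evict requests that fell out of the sliding window.
        while self._timestamps and at - self._timestamps[0] > WINDOW:
            self._timestamps.popleft()
        return len(self._timestamps) >= REQUEST_LIMIT
```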

5. Smaller platforms tend to have limited resources for dealing with online harms, be it child safety or hateful content. What advice would you give them?

My primary advice would be to reach out! The Tech Coalition has a robust set of resources, mentorship opportunities, innovative tooling, webinars, and much more to help even the smallest companies get started. Additionally, many tech companies like Google and Meta offer free tooling and support so that platforms of any size can start preventing abusive content from being shared. But if I could offer a true first step, it would be to start learning about the scale and nature of the problem as early as possible. If you allow content to be uploaded or conversations to occur, please consider child safety within your design and product flows to ensure children can use your platform without fear of harm.
