
Expert’s Corner with Lauren Tharp from Tech Coalition

For this month’s Expert’s Corner we had the pleasure of interviewing Lauren Tharp from the Tech Coalition.

The Tech Coalition is a global alliance of leading technology firms that have come together to combat the online sexual abuse and exploitation of children. Because member companies share the same goals and face many of the same challenges, we know that collaborating to develop and scale solutions offers the most promising path to addressing the problem globally.

Lauren Tharp is a Technical Program Manager based in Richmond, VA. She focuses on helping companies adopt technologies to combat online child exploitation and abuse, and on facilitating collaboration among a diverse set of industry members. Prior to joining the Tech Coalition, she worked as a Product Leader in the podcasting space, where she learned the importance of Trust and Safety through the lens of brand safety. The answers included are not intended to represent any member of the Tech Coalition or affiliated partners.

1. What is the mission of the Tech Coalition? What is the idea behind it?

The Tech Coalition facilitates the global tech industry’s fight against the online sexual abuse and exploitation of children. We are the place where tech companies all over the world come together on this important issue, recognizing that this is not an issue that can be tackled in isolation. We work relentlessly to coach, support, and inspire our industry Members to work together as a team to do their utmost to achieve this goal.

Every half second, a child makes their first click online — and the tools we all value most about the internet — our ability to create, share, learn and connect — are the same tools that are being exploited by those who seek to harm children. In this increasingly digital world, the technology industry bears a special responsibility to ensure that its platforms are not used to facilitate the sexual exploitation and abuse of children. Child protection is one place where our Members do not compete, but rather they work together to pool their collective knowledge, experience, and advances in technology to help one another keep children safe.

An example of how our work comes together is tech innovation, which is also the work I'm most passionate about. Our Tech Innovation working group exists first and foremost to increase our members' technical capabilities to combat online child sexual abuse material (CSAM). That means that even the smallest startups have access to the same knowledge and tools for detecting and preventing CSAM as the largest tech companies in the world. We help members adopt existing technologies, such as technology to find known CSAM images and videos. We also fund pilots to innovate on new solutions, such as machine learning to detect novel CSAM and reduce the need for human review. We also work closely with THORN, who have been invaluable partners in developing technology, and other subject matter experts throughout the industry to push innovation for our members.

2. With more and more children spending time online, online child safety should always be a priority for social media platforms. Have you seen specific trends / patterns in terms of child abuse? Are they getting harder to detect?

I'd say there are two major factors making it harder to readily detect online child sexual exploitation and abuse (OCSEA). The first is access. Many of us spend a significant amount of time online, where we engage not only with trusted family and friends but also with strangers. This has largely been a success: think about cold outreach for a new job, or finding peers who share a niche hobby. But the tradeoff is that this access has also made it easier for bad actors to contact or groom children online. Recent studies have shown that nearly 40% of children have been approached by adults who they believe were trying to "befriend and manipulate them". So I think we will continue to face the challenge of how to safeguard children online as bad actors subvert protective measures at an increasingly rapid pace.

The second factor is new content. Online CSAM has often taken the form of photos or pre-recorded videos, and so detection technology developed around those formats. As users adopt new technologies such as live streaming, podcasting, direct message platforms, and gaming channels, detection tools have to be trained for those use cases. The difficulty is in keeping up with the pace of that change, and in anticipating where abuse might occur next.

3. Human moderation alone is unable to deal with the scale of the issue. What types of AI do you see innovating in this space to help companies keep up with increasing volumes?

Human moderators are such an important part of keeping the Internet safe and free of child abuse imagery, but as noted, the scale of the problem requires innovative solutions (not to mention the psychological toll endured by many content moderators). To address these challenges, many companies use hashing technologies across photos and videos to detect and remove known CSAM. Hashing works by creating a digital “fingerprint” of photos or videos that have been deemed CSAM by a human moderator. These hashes are then stored in various databases that can be used across industry to automatically detect when the content is shared. The high degree of accuracy means less human moderation on content that we already know is violating.
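To make the mechanics concrete, here is a minimal sketch of hash-based matching in Python. The SHA-256 fingerprint and the hash values shown are purely illustrative assumptions; production systems use perceptual hashes (such as PhotoDNA or PDQ) that still match content after resizing or re-encoding, and they query shared industry hash databases rather than a local set.

```python
import hashlib

# Illustrative stand-in for a database of hashes of content previously
# confirmed as violating by human moderators (hypothetical values).
KNOWN_VIOLATING_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(file_bytes: bytes) -> str:
    """Return a digital fingerprint of an uploaded photo or video.

    SHA-256 only matches byte-identical copies; real deployments use
    perceptual hashing so near-duplicates are still detected.
    """
    return hashlib.sha256(file_bytes).hexdigest()

def is_known_violating(file_bytes: bytes) -> bool:
    """Check an upload against the set of known, human-verified hashes."""
    return fingerprint(file_bytes) in KNOWN_VIOLATING_HASHES

# Screen an upload before it is published.
if is_known_violating(b"uploaded file bytes"):
    print("Block the upload and route it to the reporting pipeline")
else:
    print("Continue to further checks, such as classifiers")
```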

While hashing is excellent at preventing the spread of known CSAM, it cannot detect new photos or videos. This is where AI comes into play, in the form of classifiers. Classifiers use machine learning to automatically detect whether content falls into various categories, such as nudity, age, sexual acts, and more. By combining these categories, many companies can make quick decisions about which content should be escalated for review versus content that does not meet the definition of CSAM.
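As a rough illustration of how those category scores might be combined, here is a short sketch with hypothetical classifier outputs. The category names, cut-off values, and routing labels are assumptions for the example; real systems tune them per platform and keep human reviewers in the loop for any escalation.

```python
from dataclasses import dataclass

@dataclass
class ClassifierScores:
    """Hypothetical per-category probabilities produced by ML classifiers."""
    nudity: float
    minor_depicted: float

def triage(scores: ClassifierScores) -> str:
    """Combine category scores into a routing decision.

    Thresholds are illustrative only; in practice they are tuned per
    platform and validated against human review outcomes.
    """
    if scores.nudity > 0.9 and scores.minor_depicted > 0.8:
        return "escalate_to_human_review"    # possible novel CSAM
    if scores.nudity > 0.9:
        return "adult_content_policy_check"  # violating, but not CSAM
    return "no_action"

print(triage(ClassifierScores(nudity=0.95, minor_depicted=0.9)))
# -> escalate_to_human_review
```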

4. Organizations like NCMEC, Thorn and IWF help with the detection of child sexual abuse material (CSAM), but what about child grooming? This is often harder to detect. How can platforms better prepare themselves?

Grooming is a complex topic for many reasons, including the fact that there is no standard definition of what constitutes grooming, and it can vary by platform, language, culture, and other factors. In short, grooming is all about context. A classic example is the phrase “Do you want to meet on Friday?” which could be appropriate on a dating app, but is potentially inappropriate on a children’s gaming platform, again, depending on the context.

As a result, an organization's approach to grooming detection must evolve over time to accommodate new trends. We typically recommend that companies start with keyword lists, such as the CSAM Keyword Hub, which the Tech Coalition developed in partnership with THORN. The hub pulls known terms and slang related to online grooming and child sexual abuse so that organizations can begin to filter terms and adjust their content moderation strategies.
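As a starting point, keyword screening can be as simple as matching terms from such a list against user-generated text, as in the minimal sketch below. The watchlist here contains made-up placeholder strings, not real Keyword Hub terms; an actual deployment would load the curated list, cover multiple languages and slang, and feed matches into a review queue rather than enforce automatically.

```python
import re

# Placeholder strings for illustration only; a real list would come from a
# curated resource such as the CSAM Keyword Hub.
WATCHLIST = {"example term one", "example term two"}

# One pattern per term, with word boundaries to limit accidental matches.
PATTERNS = [re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
            for term in WATCHLIST]

def flag_for_review(message: str) -> bool:
    """Return True if the message contains a watchlisted term.

    Keyword matching is a coarse first filter: matches should be routed
    to human review, not acted on automatically.
    """
    return any(pattern.search(message) for pattern in PATTERNS)

print(flag_for_review("this message mentions example term one"))  # True
```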

Increasingly, we see a shift towards using AI to detect grooming by analyzing text (such as in public conversation channels) and by noticing behavioral signals (such as an adult user randomly befriending 100 minors in the span of an hour). The Coalition is working on training grooming classifiers for specific platform use cases, and continues to fund research to understand perpetrators' grooming strategies.
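A behavioral signal like the friending example can be approximated with a simple rate rule long before any classifier is involved. The sketch below is a hypothetical illustration: the one-hour window and the threshold of 100 requests mirror the example above, and a production system would combine many such signals with text analysis and classifier scores.

```python
from datetime import datetime, timedelta

def suspicious_friending(minor_request_times: list[datetime],
                         window: timedelta = timedelta(hours=1),
                         threshold: int = 100) -> bool:
    """Flag an account sending an unusual burst of friend requests to minors.

    Counts requests to minor accounts inside a sliding window anchored at
    the most recent request; the window and threshold are illustrative.
    """
    if not minor_request_times:
        return False
    newest = max(minor_request_times)
    recent = sum(1 for t in minor_request_times if newest - t <= window)
    return recent >= threshold

# Example: 120 friend requests to minor accounts in about 30 minutes.
now = datetime.now()
requests = [now - timedelta(seconds=15 * i) for i in range(120)]
print(suspicious_friending(requests))  # True
```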

5. Smaller platforms tend to have limited resources for dealing with online harms, be it child safety or hateful content. What advice would you give them?

My primary advice would be to reach out! The Tech Coalition has a robust set of resources, mentorship opportunities, innovative tooling, webinars, and much more to help even the smallest companies get started. Additionally, many tech companies like Google and Meta offer free tooling and support so that platforms of any size can start preventing abusive content from being shared. But if I could offer a true first step, it would be to start learning about the scale and nature of the problem as early as possible. If you allow content to be uploaded or conversations to occur, please consider child safety within your design and product flows to ensure children can use your platform without fear of harm.
