
Expert’s Corner with Lauren Tharp from the Tech Coalition

For this month’s Expert’s Corner we had the pleasure of interviewing Lauren Tharp from the Tech Coalition.

The Tech Coalition is a global alliance of leading technology firms that have come together to combat online sexual abuse and exploitation of children. Because member companies have the same goals and face many of the same challenges, we know that collaborating to develop and scale solutions offers the most promising path towards a global solution to the problem.

Lauren Tharp is a Technical Program Manager based in Richmond, VA. She focuses her time on helping companies adopt technologies to combat online child exploitation and abuse, and on facilitating collaboration among a diverse set of industry members. Prior to joining the Tech Coalition, she worked as a Product Leader in the podcasting space, where she learned the importance of Trust and Safety through the lens of brand safety. The answers included are not intended to represent any member of the Tech Coalition or affiliated partners.

1. What is the mission of the Tech Coalition? What is the idea behind it?

The Tech Coalition facilitates the global tech industry’s fight against the online sexual abuse and exploitation of children. We are the place where tech companies all over the world come together on this important issue, recognizing that this is not an issue that can be tackled in isolation. We work relentlessly to coach, support, and inspire our industry Members to work together as a team to do their utmost to achieve this goal.

Every half second, a child makes their first click online — and the tools we all value most about the internet — our ability to create, share, learn and connect — are the same tools that are being exploited by those who seek to harm children. In this increasingly digital world, the technology industry bears a special responsibility to ensure that its platforms are not used to facilitate the sexual exploitation and abuse of children. Child protection is one place where our Members do not compete, but rather they work together to pool their collective knowledge, experience, and advances in technology to help one another keep children safe.

An example of how our work comes together is tech innovation — this is also the work I’m most passionate about. Our Tech Innovation working group exists first and foremost to increase our members’ technical capabilities to combat online CSAM. That means that even the smallest startups have access to the same knowledge and tools for detecting and preventing CSAM as the largest tech companies in the world. We help members adopt existing technologies, such as tools that find known CSAM images and videos. We also fund pilots to innovate on new solutions, such as machine learning that detects novel CSAM and reduces the need for human review. We also work closely with THORN — who have been invaluable partners in developing technology — and other subject matter experts throughout the industry to push innovation for our members.

2. With more and more children spending time online, online child safety should always be a priority for social media platforms. Have you seen specific trends / patterns in terms of child abuse? Are they getting harder to detect?

I’d say there are two major factors making it harder to readily detect online child sexual exploitation and abuse (OCSEA). The first is access. Many of us spend a significant amount of time online, where we engage not only with trusted family and friends but also with strangers. That access has largely been a success — think about cold outreach for a new job or finding peers who share a niche hobby. But the tradeoff is that it has also made it easier for bad actors to contact or groom children online. Recent studies have shown that nearly 40% of children have been approached by adults who they believe were trying to “befriend and manipulate them”. So I think we will continue to face the challenge of how to safeguard children online as bad actors subvert protective measures at an increasingly rapid pace.

The second factor is new content. Online CSAM has most often taken the form of photos or pre-recorded videos, so detection technology was built around those formats. As users adopt new technologies such as live streaming, podcasting, direct messaging platforms, and gaming channels, the detection tools have to be trained for those use cases. The difficulty is in keeping up with the pace of that change, and anticipating where abuse might occur next.

3. When thinking about moderation, human moderation alone is unable to deal with the scale of the issue. What types of AI do you see innovating in this space to help companies keep up with increasing volumes?

Human moderators are such an important part of keeping the Internet safe and free of child abuse imagery, but as noted, the scale of the problem requires innovative solutions (not to mention the psychological toll endured by many content moderators). To address these challenges, many companies use hashing technologies across photos and videos to detect and remove known CSAM. Hashing works by creating a digital “fingerprint” of photos or videos that have been deemed CSAM by a human moderator. These hashes are then stored in various databases that can be used across industry to automatically detect when the content is shared. The high degree of accuracy means less human moderation on content that we already know is violating.
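To make the “fingerprint” idea concrete, here is a minimal sketch of hash-based matching in Python. It uses the open-source imagehash library as a stand-in for the purpose-built perceptual hashes (such as PhotoDNA or PDQ) and the vetted hash lists that companies actually rely on; the sample hash, filename, and distance threshold are purely illustrative.

```python
from PIL import Image
import imagehash

# Hypothetical list of hashes previously confirmed by human moderators.
# Real deployments use vetted databases maintained by organizations like NCMEC.
KNOWN_HASHES = {imagehash.hex_to_hash("f0e4c2d7a1b3869d")}
MAX_DISTANCE = 6  # tolerance for small edits such as resizing or re-encoding

def matches_known_content(path: str) -> bool:
    """Return True if an uploaded image is a near-duplicate of a known hash."""
    fingerprint = imagehash.phash(Image.open(path))  # 64-bit perceptual "fingerprint"
    return any(fingerprint - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if matches_known_content("upload.jpg"):
    print("Known-content match: remove, report, and skip full human review")
```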

While hashing is excellent at preventing the spread of known CSAM, it cannot detect new photos or videos. This is where AI in the form of classifiers comes into play. Classifiers use machine learning to automatically detect whether content falls into various categories, such as nudity, age, sexual acts, and more. By combining these categories, many companies can make quick decisions about which content should be escalated for review versus which content does not meet the definition of CSAM.
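As an illustration of how those category scores might be combined, here is a simplified sketch. The category names, thresholds, and routing labels are assumptions for the example, not any particular vendor’s model.

```python
from dataclasses import dataclass

@dataclass
class CategoryScores:
    nudity: float         # classifier confidence, 0.0 to 1.0
    minor_present: float
    sexual_act: float

def route_content(scores: CategoryScores) -> str:
    """Combine per-category classifier scores into a simple review-routing decision."""
    if scores.minor_present > 0.8 and max(scores.nudity, scores.sexual_act) > 0.8:
        return "escalate_for_urgent_review"
    if max(scores.nudity, scores.minor_present, scores.sexual_act) > 0.5:
        return "queue_for_standard_review"
    return "no_action"

print(route_content(CategoryScores(nudity=0.92, minor_present=0.88, sexual_act=0.10)))
# -> escalate_for_urgent_review
```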

4. Organizations like NCMEC, Thorn and IWF help with the detection of child sexual abuse material (CSAM), but what about child grooming? This is often harder to detect. How can platforms better prepare themselves?

Grooming is a complex topic for many reasons, including the fact that there is no standard definition of what constitutes grooming; it can vary by platform, language, culture, and other factors. In short, grooming is all about context. A classic example is the phrase “Do you want to meet on Friday?”, which could be appropriate on a dating app but potentially inappropriate on a children’s gaming platform; again, it all depends on context.

As a result, an organization’s approach to grooming detection must evolve over time to accommodate new trends. We typically recommend that companies start with keyword lists, such as the CSAM Keyword Hub, which the Tech Coalition developed in partnership with THORN. The hub pulls known terms and slang related to online grooming and child sexual abuse so that organizations can begin to filter terms and adjust their content moderation strategies.
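A keyword-list filter can be as simple as the sketch below. The placeholder terms stand in for the hub’s actual entries, which are shared with member companies rather than published.

```python
import re

# Placeholder terms only; real lists come from resources like the CSAM Keyword Hub.
FLAGGED_TERMS = {"example_slang_1", "example_slang_2"}

def flag_message(text: str) -> set[str]:
    """Return any flagged terms that appear in a message, case-insensitively."""
    tokens = set(re.findall(r"[a-z0-9_]+", text.lower()))
    return tokens & FLAGGED_TERMS

hits = flag_message("hey, example_slang_1?")
if hits:
    print(f"Message routed for review, matched terms: {hits}")
```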

Increasingly we see a shift towards using AI to detect grooming by analyzing text (such as in public conversation channels) and by noticing behavioral signals (such as an adult user randomly befriending 100 minors in the span of an hour). The Coalition is working on training grooming classifiers on specific platform use cases, and continues to fund research to understand perpetrators’ grooming strategies.
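To illustrate the behavioral-signal side, here is a rough sketch of the “many friend requests to minors in a short window” heuristic mentioned above. The threshold and window size are assumptions for the example, not published rules.

```python
from datetime import datetime, timedelta

FRIEND_REQUEST_LIMIT = 20    # flag after this many requests to minors...
WINDOW = timedelta(hours=1)  # ...inside any rolling one-hour window

def is_suspicious(request_times: list[datetime]) -> bool:
    """Flag an account whose friend requests to minors cluster too tightly in time."""
    times = sorted(request_times)
    start = 0
    for end, t in enumerate(times):
        # shrink the window from the left until it spans at most one hour
        while t - times[start] > WINDOW:
            start += 1
        if end - start + 1 >= FRIEND_REQUEST_LIMIT:
            return True
    return False

# Example: 25 requests sent two minutes apart would be flagged.
burst = [datetime(2024, 1, 1, 12, 0) + i * timedelta(minutes=2) for i in range(25)]
print(is_suspicious(burst))  # True
```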

5. Smaller platforms tend to have limited resources for dealing with online harms, be it child safety or hateful content. What advice would you give them?

My primary advice would be to reach out! The Tech Coalition has a robust set of resources, mentorship opportunities, innovative tooling, webinars, and much more to help even the smallest companies get started. Additionally, many tech companies like Google and Meta offer free tooling and support so that platforms of any size can stop abusive content from being shared. But if I could offer a true first step, it would be to start learning about the scale and nature of the problem as early as possible. If you allow content to be uploaded or conversations to occur, please consider child safety within your design and product flows so that children can use your platform without fear of harm.

