
Expert’s Corner with Head of Product Yu-Lan Scholliers

We recently spoke with Checkstep’s newest addition to the team, Yu-Lan Scholliers, who joins us as Head of Product after her most recent role at Facebook as a Trust and Safety Specialist. We asked her a bit about her background and interests to get to know her a little better.

1. What drew you to the Trust & Safety / Online Safety space?

I would say that the space picked me! A couple of years ago, I wanted to change teams within Meta but had no idea what I wanted to do. The manager of the Trust & Safety team (internally it’s called Community Integrity) presented me with a couple of options, and when I saw counter-terrorism on the list, I immediately knew that’s what I wanted to work on. Later on, I also worked on other areas such as Adult Sexual Exploitation and Suicide & Self-Injury Prevention. I love the Trust & Safety space for two main reasons:

  • Helping to build safe communities: the internet is here to stay, and I want to contribute to making it a safe space where people can get together, share, interact and learn
  • The variety of challenges: not only technical ones, but also how to define and refine policies based on learnings and how to integrate legislative requirements. Above all, the adversarial aspect in some areas is fascinating; this is a problem that is never fully solved.

2. Why did you choose Checkstep?

Moving to a start-up was a big step for me after spending my entire career so far in big corporations:

  • I believe Checkstep is a great product that aggregates different technologies, built both in-house and through partnerships, together with workflow management, so we can offer clients a one-stop shop
  • What convinced me was the founders’ proven track record of successfully building and growing start-ups, together with their mission-driven mindset

I’m also excited to join a start-up as a new personal challenge, bringing my experience in Trust & Safety to this fast-paced and innovative environment, working with some incredible people!

3. What are your interests outside of work (hobbies, etc.)?

Aah, so many!! I love exploring new places with my partner (2021 has been my most well-traveled year ever), and I enjoy spending time with my friends. I recently started learning to play the piano, and you can also find me cantering through the countryside on my horse, Picasso. I’m a curious person and love learning new things, so I read a lot of books and listen to podcasts on a variety of topics.

4. Is the upcoming legislation (UK’s Online Safety Bill) a reflection of existing Trust & Safety practices at Big Tech? What do you think the impact is on smaller platforms?

In my opinion, Big Tech has played a major role in shaping online safety regulations, both in the UK and in Europe. Due to the sheer size of these networks, the impact of any safety issue on the platform can be severe, which has driven regulators to think about how to deal with them. I believe an open discussion on regulation is welcome; however, the current regulation sometimes stays very vague on some hugely complex issues. Big platforms already have many tools in place, so the legislation will mainly affect smaller platforms, which do not necessarily have the required resources, nor do they want to run the risk of executives facing jail time. As a result, I believe some platforms might seriously reconsider the value of their UK presence, or might err on the side of caution, which could result in over-enforcement.

5. What do you think are the key indicators of a good trust and safety system?

In general, it’s always a good sign if a platform has a strong cross-functional group of people working on the problem, i.e. policy experts, engineering, product, and operations. On a more granular level, a good system is designed both to prevent harm from happening and to reduce any harm that does occur in a timely and efficient manner. The ultimate indicator is the prevalence of a given harm on the platform: a platform wants to be able to measure how much harm there is and to reduce it over time. However, measuring prevalence can be extremely difficult, and a range of other indicators exist that might serve as proxies. I could write a whole book on this, but the best way to learn more is to have a look at the transparency reports published by various tech platforms.
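To make the prevalence idea concrete, here is a minimal sketch of how a platform might estimate it by labeling a random sample of content views and attaching a confidence interval. This is an illustration under simplifying assumptions, not Checkstep’s or any platform’s actual methodology; all names are hypothetical.

```python
import math
import random

def estimate_prevalence(view_log, label_fn, sample_size=1000, z=1.96):
    """Estimate the share of content views that violate policy.

    view_log  -- iterable of content items, one entry per view
    label_fn  -- returns True if a reviewer (human or model) judges
                 the item violating; in practice the costly step
    Returns (point_estimate, margin_of_error) at ~95% confidence.
    """
    # Assumes the log holds at least sample_size entries
    sample = random.sample(list(view_log), sample_size)
    violating = sum(1 for item in sample if label_fn(item))
    p = violating / sample_size
    # Normal-approximation confidence interval for a proportion
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical usage:
# p, moe = estimate_prevalence(views, reviewer_labels)
# print(f"Prevalence: {p:.3%} ± {moe:.3%}")
```

In practice the hard parts are the labeling step and obtaining an unbiased sample of views, which is one reason platforms also publish proxy metrics such as report volumes and proactive-detection rates in their transparency reports.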

6. Do you think women are well-represented in tech? Do you think diversity and the tech industry go hand-in-hand?

No, and neither are other diversity groups. I have been fortunate to work with many incredible people, including women, during my time at Meta, because the data function was very well balanced in terms of gender; however, I have also worked with engineering teams that were all male. This is due to a combination of supply issues (some groups not being able to access education, or not believing tech is the right path for them) and hiring challenges.

I do think the tech industry has the means and capabilities to address this, e.g. by providing access to education and information. Two examples of such efforts I really like are The Female Lead and the Your Life Campaign, both driven by Edwina Dunn. I believe we need to focus on all-encompassing diversity, looking beyond gender alone, by supporting and inspiring adolescents. By doing so, not only can we lift up and grow more people, but we can also ensure that the people who build platforms are culturally diverse and representative of the users on those platforms.

7. Isolation often leads people to take drastic measures, such as self-harm or partaking in online challenges that could be dangerous. Platforms often find such incidents harder to detect. In your opinion, how can platforms be better prepared?

There is a delicate balance between letting people express thoughts around mental health, suicide and self-harm and protecting vulnerable users: keeping such content on a platform allows the community to intervene and reach out, but that same content can also be extremely triggering. I believe one of the things a platform can do here is to have a clear and comprehensive policy in place, which product teams can then translate into detection and safeguarding efforts, while also building tools for the community to raise any issues or risks they perceive on the platform.
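As an illustration of what translating such a policy into product logic might look like, here is a simplified, hypothetical sketch: the classifier scores, thresholds and actions are invented for the example, and a real system would involve far more signals and review by clinical and policy experts.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    # Hypothetical scores from upstream classifiers, each in [0, 1]
    self_harm_risk: float    # likelihood the content signals imminent risk
    support_seeking: float   # likelihood the author is reaching out for help

def route(a: Assessment) -> str:
    """Map classifier scores to a safeguarding action per a written policy.

    Thresholds and action names are illustrative only; a real policy
    would be far more detailed and refined over time.
    """
    if a.self_harm_risk >= 0.9:
        return "escalate_to_crisis_team"   # urgent human review
    if a.support_seeking >= 0.7:
        return "show_support_resources"    # keep content up, surface helplines
    if a.self_harm_risk >= 0.5:
        return "send_to_human_review"      # borderline: a person decides
    return "no_action"                     # community reporting as a backstop
```

Note that the middle branch keeps support-seeking content up rather than removing it, reflecting the balance described above, while community reporting tools remain the backstop for anything the classifiers miss.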

8. When the Taliban took control of Afghanistan, they also took control of the national social media channels. But at that point, they were still designated a terrorist organization. Even though no extremist activity has been detected, do you think they should be allowed access to the various social media platforms?

I am against any form of violence, and for me, violence has no place in society, let alone on social media. The issue can be very nuanced, however: it is up for debate, for example, whether a fighter using social media for personal purposes, without explicit affiliation to any entity, should be banned. There are often no clear-cut decisions, so it’s very important for platforms to have a clear policy on this, while also ensuring there is a feedback loop in place. Designating organizations can be very polarizing and politically charged, and it’s important that platforms are aware of the impact this can have.
