
Expert’s Corner with T&S Expert Manpreet Singh

Our featured expert this month is someone who has been working hard to keep people safe online.

Manpreet Singh is a Trust and Safety professional working in London, UK. She has developed her interest in Child Online Safety and Tech Policy through a myriad of experiences, including working as a policy researcher at a leading children’s charity in London and taking part in a multi-stakeholder project examining children and age assurance in the digital age. Her interests include safety by design, data protection, diverse and inclusive content moderation, and reducing online harms. The answers below are not intended to represent any current or past institutional affiliations.

1. The pandemic put a stop to many outdoor activities that children would ordinarily partake in, shifting their focus to online activities such as gaming or taking part in various TikTok challenges. Some of these challenges can be particularly dangerous, e.g. the Milk Crate Challenge, and platforms can’t always anticipate or prepare for them in advance. With such emerging trends, how can platforms better prepare themselves?

Childhood is increasingly taking place online, and many experts have asserted that the pandemic has accelerated this change. Participating in online activities such as gaming and social media has helped many children cope with frightening and sudden changes to their daily routine, but as you have pointed out, some of these activities can be dangerous.

However, dangerous challenges (or challenges in general) are not a unique feature of the digital world — many ‘offline’ challenges children partake in can be risky, even ones as innocuous as the age-old ding dong ditch. That being said, I do believe that online platforms can adopt safety-by-design approaches to help protect their users. There are a few underlying factors that can make online challenges particularly risky for children: children may not be aware of the risks and consequences of participating, the barriers and friction to participating are low, and, importantly, engagement-based ranking creates incentives to participate that are unique to online challenges.

Recently, TikTok announced an expansion of its efforts to combat dangerous challenges, introducing measures such as a dedicated policy category within its Community Guidelines, a four-step campaign encouraging users to “stop, think, decide, act”, and a reporting menu so users can flag problematic challenges. This is a good example of how platforms can address the first factor: children not understanding the consequences of participating in a viral challenge.

I am more concerned with the second and third factors I’ve identified. The viral nature of these online challenges is due in part to how easy it is for a child to watch a video of someone else attempting one and then immediately hit the record button. The Milk Crate Challenge in particular may have some barriers to entry — it’s not likely that a child will have access to that many milk crates immediately — but other dangerous challenges in circulation require virtually no time or elaborate props. Moreover, collaborative features provided by a platform, such as the ‘duet’ on TikTok, make it even easier and more tempting for a child to record their own attempt. This is another area where safety-by-design can mitigate risks. Again, I’ll point to some changes TikTok has made to reduce how easy it is to react to and take part in challenges: TikTok users between the ages of 13 and 15 do not have access to such collaborative features, and users aged 16 and 17 can only ‘stitch’ their videos with Friends by default. These default settings can raise the ‘barrier to entry’ just enough that children may decide it is simply not worth the hassle of reconfiguring settings to share their own attempt more widely.
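
To make the idea concrete, here is a minimal sketch (in Python, using hypothetical names rather than TikTok’s actual implementation) of how age-based defaults can add friction to collaborative features:

    from dataclasses import dataclass

    @dataclass
    class CollaborationDefaults:
        duet_enabled: bool       # can others remix this user's videos?
        stitch_audience: str     # "everyone", "friends", or "off"

    def defaults_for_age(age: int) -> CollaborationDefaults:
        """Safety-by-design defaults; loosening them requires a deliberate settings change."""
        if age < 13:
            raise ValueError("Under-13 accounts should not reach these features at all")
        if age <= 15:
            # Younger teens: collaborative features off by default.
            return CollaborationDefaults(duet_enabled=False, stitch_audience="off")
        if age <= 17:
            # Older teens: stitching limited to friends unless they opt out.
            return CollaborationDefaults(duet_enabled=True, stitch_audience="friends")
        return CollaborationDefaults(duet_enabled=True, stitch_audience="everyone")

    print(defaults_for_age(14))  # duet_enabled=False, stitch_audience='off'

The specific thresholds are illustrative; the point is that the safest option is the default, so sharing an attempt more widely requires an extra, deliberate step.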

Anyone who has been keeping an eye on digital/tech news over the past few months will not be a stranger to the phrase ‘engagement-based ranking’, and for good reason: it plays a huge role in understanding why children continue to participate in these trending but dangerous challenges. These online challenges, accompanied by hashtags, can garner children hundreds or thousands of likes. This engagement — which many children read as a marker of popularity or social standing — can incentivise them to take actions they might otherwise decide against. Unfortunately, I don’t have a concrete answer for how platforms can mitigate this risk without re-thinking the role of engagement-based ranking, but I hope that platforms are inspired to explore other methods, such as Twitter allowing users to revert to a reverse-chronological timeline.
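
As a toy illustration of the difference (not any platform’s real ranking code), engagement-based ranking surfaces whatever drew the most likes, while a reverse-chronological timeline gives virality no extra reach:

    from datetime import datetime

    posts = [
        {"id": 1, "created": datetime(2021, 9, 1, 10), "likes": 12000},  # viral challenge clip
        {"id": 2, "created": datetime(2021, 9, 1, 12), "likes": 40},
        {"id": 3, "created": datetime(2021, 9, 1, 14), "likes": 900},
    ]

    def engagement_ranked(feed):
        # The most-engaged post wins, however old or risky it is.
        return sorted(feed, key=lambda p: p["likes"], reverse=True)

    def reverse_chronological(feed):
        # The newest post wins; popularity confers no additional reach.
        return sorted(feed, key=lambda p: p["created"], reverse=True)

    print([p["id"] for p in engagement_ranked(posts)])      # [1, 3, 2]
    print([p["id"] for p in reverse_chronological(posts)])  # [3, 2, 1]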

2. Upcoming legislation is largely a reflection of existing trust and safety practices at Big Tech. Does the introduction of this legislation, and the resulting burden of implementation, stifle innovation or disadvantage challenger social networks?

Regulation is not always a constraint. It can be an opportunity to level the playing field, drive tech companies towards new innovation, or force them to meet the changing expectations and demands users have of the services and platforms they use. The right legislation could lead to the development of innovative products and services; for example, it wouldn’t surprise me if in the next few years we saw a growth in third-party apps that help people manage their digital footprint, or browser extensions that help users scan for potentially misleading information. I do see this as an opportunity for newer players to introduce novel approaches and move the field forward.

At the same time, I worry that Big Tech may be better equipped to adapt to these changes than smaller platforms and startups. It takes a lot of manpower and expertise to maintain content moderation policies that can adapt to new abuse trends, or to have dedicated data protection analysts conducting impact assessments, all while trying to attract new users. One solution is to have different tiers of regulation based on the size of the platform, similar to the EU’s approach in the Digital Services Act, but given that any new startup can go viral overnight, it’s not clear how well this would work in the long run.

I am also seeing many new players entering the field that challenge how we categorise what a ‘social network’ is, for example the audio-only Clubhouse. I would like to see upcoming legislation encompass platforms more broadly, even thinking beyond the distinction between services that host ‘user-generated content’ and those that do not.

3. How do we create online spaces or content moderation policies with diverse users in mind? For example, children may require extra protections online, as may LGBTQ users, non-native English speakers, or those with accessibility needs.

This is an extremely important question and one that I am really passionate about. I think the digital world can be brilliant at providing information, community, or a safe space for different groups of people. I think of Google Maps helping people find wheelchair-accessible places, or of social media being used as a safer alternative to dating apps by young people in the LGBTQ community. That makes it all the more important that, as T&S professionals, we keep platforms safe for everyone.

It can be difficult to factor in, or even be aware of, the concerns of various groups when those creating content moderation policies are themselves homogenous. This is why it’s crucial that T&S professionals strive to learn about — or be representative of — diverse groups, so content moderation can be inclusive of many different ideas of safety. For example, it is very easy for platforms to define image-based abuse or ‘non-consensual intimate images’ (NCII) as a nude or otherwise sexually explicit image of a user shared without their consent. But for women who wear a Hijab, an image of them without one, distributed without their consent, must also be viewed as NCII.

There is no “one size fits all” solution when it comes to trust and safety, so platforms will need to devote more time and resources to this area than they may have in the past. But there are general things that will help keep many users safe: stronger data privacy, more options for users to tailor their experiences, and safety tools and resources provided in many languages.

4. As moderation efforts depend more on AI, is there a risk of negative unintended consequences? We’ve already seen technology solutions harm marginalised groups and interfere with free expression. Given that, how should platforms balance scalability, accuracy and protecting human rights?

There is absolutely a risk of negative unintended consequences, and I would be nervous to see moderation efforts trend towards prioritising AI-based solutions over a mix of AI and human moderation. AI is a tool, and it can very easily amplify the biases we already have. I’m also hesitant about turning to AI for questions about misinformation, especially political misinformation. It’s not clear whom users should look to as the authority on what information is accurate, especially when many turn to online platforms to protest and find solidarity against systemic injustices. This can have a concrete impact on free expression.
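
A minimal sketch of that mix, with purely illustrative labels and thresholds: the automated system only acts on high-confidence cases and routes everything ambiguous to human reviewers.

    def route_content(item_id: str, label: str, confidence: float) -> str:
        # Thresholds here are illustrative, not any platform's production values.
        if label == "violating" and confidence >= 0.98:
            return f"{item_id}: auto-removed (user notified, appeal available)"
        if label == "benign" and confidence >= 0.95:
            return f"{item_id}: left up"
        # Ambiguous or context-dependent cases (satire, protest footage,
        # reclaimed slurs) go to trained human moderators, not the model.
        return f"{item_id}: queued for human review"

    print(route_content("post-101", "violating", 0.99))  # auto-removed
    print(route_content("post-102", "violating", 0.71))  # human review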

Some ideas for how platforms can balance scalability, accuracy and human rights include having:

  • Clearly defined content moderation policies — these help users understand what is and isn’t allowed up front
  • Mechanisms for redress so users can challenge moderation decisions — moderation will never be perfect, and it’s important that users have the opportunity to speak to an agent when something goes wrong (a small sketch of this follows the list)
  • Regularly published transparency reports — including metrics for when governments request content to be taken down and opportunities for improvement
  • Regularly conducted impact assessments — it’s important that these impact assessments are undertaken both before and after the launch of a new product or service
  • Trust and Safety experts included in the design process — as opposed to being brought on after the service “takes off” or a new product is launched. This is the difference between being reactive and proactive
  • AI tools developed by scientists of different backgrounds — or using representative data sets to train models
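
On the redress point above, here is a small sketch (with hypothetical field names, not any platform’s real schema) of what an appealable moderation decision might record so that a person, not just the original system, reviews the challenge:

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ModerationDecision:
        content_id: str
        action: str              # e.g. "removed", "age-restricted"
        policy_cited: str        # the policy section relied on
        decided_by: str          # "automated" or a reviewer ID
        appealable: bool = True
        appeal_outcome: Optional[str] = None

    def file_appeal(decision: ModerationDecision, user_statement: str) -> dict:
        """Open an appeal ticket that must be resolved by a human reviewer."""
        if not decision.appealable:
            raise ValueError("This decision type is not appealable")
        return {
            "content_id": decision.content_id,
            "original_action": decision.action,
            "policy_cited": decision.policy_cited,
            "user_statement": user_statement,
            "opened_at": datetime.now(timezone.utc).isoformat(),
            "assigned_to": "human_review_queue",
        }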

5. What can tech companies do, if anything, to attract a more diverse workforce?

I’m really optimistic about a more diverse tech workforce after seeing many big companies adopt more flexible working patterns. Expanding paid holiday, sick leave, adoption, and bereavement policies, as Bumble did in 2021, is another fantastic step towards accommodating brilliant people from underrepresented groups. It’s important to embed diversity and inclusion in the organisational structure of these companies, for example by having dedicated T&S teams working together all over the world rather than ‘hubs’ located only in specific regions. This makes it less likely that tech companies will enforce a purely Western view of online safety.
