Expert’s Corner with T&S Expert Manpreet Singh

Our featured expert this month is someone who has been working hard to keep people safe online.

Manpreet Singh is a Trust and Safety professional working in London, UK. She has developed her interest in Child Online Safety and Tech Policy through a myriad of experiences, including working as a policy researcher at a leading children’s charity in London and taking part in a multi-stakeholder project examining children and age assurance in the digital age. Her interests include safety by design, data protection, diverse and inclusive content moderation, and reducing online harms. The answers below are not intended to represent the views of any current or past institutional affiliations.

1. The pandemic definitely put a stop to outdoor activities that children would likely partake in, shifting their focus to online activities such as gaming or participating in various TikTok challenges. Some of these challenges can be particularly dangerous, e.g. the Milk Crate Challenge, and they are hard for platforms to anticipate or prepare for. With such emerging trends, how can platforms better prepare themselves?

Childhood is increasingly taking place online, and many experts have asserted that the pandemic has accelerated this change. Participating in online activities such as gaming and social media has helped many children cope with frightening and sudden changes to their daily routine, but as you have pointed out, some of these activities can be dangerous.

However, dangerous challenges (or challenges in general) are not a unique feature of the digital world — many ‘offline’ challenges children partake in can be risky, even ones as innocuous as the age-old ding dong ditch. That being said, I do believe that online platforms can adopt safety-by-design approaches to help protect their users. There are a few underlying factors which can make online challenges particularly risky for children: children may not be aware of the risks and consequences of participating, the barriers and friction to taking part are low, and, importantly, engagement-based ranking creates incentives to participate that are unique to online challenges.

Recently, TikTok announced an expansion of its efforts to combat dangerous challenges, introducing measures such as a dedicated policy category within its Community Guidelines, a four-step campaign encouraging users to “stop, think, decide, act”, and a reporting menu so users can flag problematic challenges. This is a good example of how platforms can address the first factor: children not understanding the consequences of participating in a viral challenge.

I am more concerned with the second and third factors I’ve identified. The viral nature of these online challenges is in part due to how easy it is for a child to watch a video of someone else attempting one, and then immediately hit the record button. The Milk Crate Challenge in particular may have some barriers to entry — it’s not likely that a child will have access to that many milk crates immediately — but other dangerous challenges circulating have virtually no time or elaborate prop requirements. Moreover, collaborative features provided by a platform, such as the ‘duet’ on TikTok, make it even easier and more tempting for a child to record their own attempt. This is another area where safety-by-design can mitigate risks. Again, I’ll point to some changes TikTok have made to reduce how easy it is to react to and take part in challenges: TikTok users between the ages of 13 and 15 do not have access to such collaborative features, and users aged 16 and 17 will only be able to ‘stitch’ their videos with Friends by default. These default settings can raise the ‘barrier of entry’ to these challenges just enough that children may decide it is simply not worth the hassle of configuring settings just to share their own attempt more widely.
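
As a rough illustration of what those age-based defaults might look like as a safety-by-design rule, here is a minimal Python sketch. The age bands loosely mirror the TikTok example above, but the function name, return values and thresholds are hypothetical assumptions for illustration, not any platform’s actual implementation.

```python
def default_collab_audience(age: int) -> str:
    """Hypothetical default audience for duet/stitch-style features.

    Younger users get safer defaults: 13-15 have collaborative features
    switched off, 16-17 default to friends only, and 18+ default to
    everyone. Loosening a default takes a deliberate settings change,
    which is exactly the extra friction described above.
    """
    if age < 13:
        raise ValueError("under-13 accounts are out of scope for this sketch")
    if age <= 15:
        return "off"
    if age <= 17:
        return "friends"
    return "everyone"


print(default_collab_audience(14))  # -> "off"
print(default_collab_audience(16))  # -> "friends"
```

The point is not the specific values but that the safer option is the default, so the path of least resistance is also the lower-risk one.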

Anyone who has been keeping an eye on digital/tech news over the past few months will not be a stranger to the phrase ‘engagement-based ranking’, and for good reason: it plays a huge role in understanding why children continue to participate in these trending but dangerous challenges. These online challenges, accompanied by hashtags, can garner children hundreds or thousands of likes. This engagement — which in many ways is viewed by children as a marker of popularity or social standing — can often incentivise them to take actions that they might otherwise decide against. Unfortunately, I don’t have a concrete answer for how platforms can mitigate this risk without re-thinking the role of engagement-based ranking, but I hope that platforms are inspired to explore other methods, such as Twitter allowing users to revert to a reverse-chronological timeline.
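
To make that contrast concrete, here is a small, purely illustrative Python sketch of the difference between an engagement-ranked feed and a reverse-chronological one. The scoring formula is an assumption made up for the example; real ranking systems use far richer signals.

```python
from datetime import datetime
from typing import NamedTuple


class Post(NamedTuple):
    author: str
    created_at: datetime
    likes: int
    shares: int


def engagement_ranked(posts: list[Post]) -> list[Post]:
    # Toy score: likes plus a heavier weight on shares. Whatever spreads
    # fastest floats to the top, regardless of when it was posted.
    return sorted(posts, key=lambda p: p.likes + 3 * p.shares, reverse=True)


def reverse_chronological(posts: list[Post]) -> list[Post]:
    # Newest first, regardless of how much engagement a post attracts.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)


feed = [
    Post("a", datetime(2021, 11, 1, 9, 0), likes=5, shares=0),
    Post("b", datetime(2021, 11, 1, 8, 0), likes=900, shares=40),
]
print([p.author for p in engagement_ranked(feed)])      # ['b', 'a']
print([p.author for p in reverse_chronological(feed)])  # ['a', 'b']
```

The first ordering is the one that rewards a risky but attention-grabbing challenge video with more reach; the second removes that amplification without removing the content itself.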

2. Upcoming legislation is largely a reflection of existing trust and safety practices at Big Tech. Does the introduction of this legislation, and the resulting burden of implementation, stifle innovation or disadvantage challenger social networks?

Regulation is not always a constraint. It can be an opportunity to level the playing field, drive tech companies towards new innovation, or force them to meet the changing expectations and demands users have of the services and platforms they use. The right legislation could lead to the development of innovative products and services; for example, it wouldn’t surprise me if in the next few years we saw a growth in third-party apps that help people manage their digital footprint, or browser extensions that help users scan for potentially misleading information. I do see this as an opportunity for newer players to introduce novel approaches and move the field forward.

At the same time, I worry that Big Tech may be better equipped to adapt to changes than smaller platforms and startups. It takes a lot of manpower and expertise to maintain content moderation policies that can adapt to new abuse trends, or to have dedicated data protection analysts conducting impact assessments, all while trying to attract new users. One solution is to have different tiers of regulation based on the size of the platform, similar to the EU’s approach in the Digital Services Act, but given that any new startup can go viral overnight, it’s not clear how well this would work in the long run.

I am also seeing many new players entering the field that challenge how we categorise what a ‘social network’ is, such as the audio-only Clubhouse. I would like to see upcoming legislation encompass platforms more broadly, even thinking beyond distinctions of ‘user-generated content’ vs. not.

3. How do we create online spaces or content moderation policies with diverse users in mind? For example, children, LGBTQ users, non-native English speakers, and those with accessibility needs may each require extra protections online.

This is an extremely important question and one that I am really passionate about. I think the digital world can be brilliant at providing information, community, or a safe space for different groups of people. I think of Google Maps helping people find wheelchair-accessible places, or social media being used as a safer alternative to dating apps for young people in the LGBTQ community. It’s all the more important that, as T&S professionals, we keep platforms safe for everyone.

It can be difficult to factor in, or even be aware of, the concerns of various groups when those creating content moderation policies are themselves homogenous. This is why it’s crucial that T&S professionals strive to learn about — or be representative of — diverse groups, so content moderation can be inclusive of many different ideas of safety. For example, it is very easy for platforms to define image-based abuse or ‘non-consensual intimate images’ (NCII) as a nude or otherwise sexually explicit image of a user shared without their consent. But for women who wear a hijab, an image of them without it, distributed without their consent, must also be treated as NCII.

There is no “one size fits all” solution when it comes to trust and safety, and so platforms will need to devote more time and resources to this area than they may have in the past. But there are general things that will help keep many users safe: more (data) privacy, more options for users to tailor their experiences, and safety tools and resources provided in many languages.

4. As moderation efforts depend more on AI, is there a risk of negative unintended consequences? We’ve already seen technology solutions harm marginalised groups and interfere with free expression. How should platforms balance scalability, accuracy and protecting human rights?

There is absolutely a risk of negative unintended consequences, and I would be nervous to see moderation efforts trend towards prioritising AI-based solutions over a mix of AI and human moderation. AI is a tool, and it can very easily amplify the biases we already have. I’m also hesitant about turning to AI for questions about misinformation, especially political misinformation. It’s not clear who users should be looking to as the authority on what information is accurate, especially when many turn to online platforms to protest and find solidarity against systemic injustices. This can have a concrete impact on free expression.

Some ways platforms can balance scalability, accuracy and upholding human rights include having:

  • Clearly defined content moderation policies — this helps users understand up front what is and isn’t allowed
  • Mechanisms for redress for users to challenge moderation decisions — moderation will never be perfect and it’s important that users have the opportunity to speak to an agent when something goes wrong
  • Regularly published transparency reports — including metrics for when governments request content to be taken down and opportunities for improvement
  • Regularly conducted impact assessments — it’s important that these impact assessments are undertaken both before and after the launch of a new product or service
  • Trust and Safety experts included in the design process — as opposed to being brought on after the service “takes off” or a new product is launched. This is the difference between being reactive and proactive
  • AI tools developed by scientists of different backgrounds — or using representative data sets to train models

5. What can tech companies do, if anything, to attract a more diverse workforce?

I’m really optimistic about a more diverse tech workforce after seeing many big companies adopt more flexible working patterns. Expanding paid holiday, sick leave, adoption, and bereavement policies, as Bumble did in 2021, is another fantastic step towards accommodating brilliant people from underrepresented groups. It’s important to embed diversity and inclusion in the organisational structure of these companies, for example by having dedicated T&S teams working together all over the world rather than ‘hubs’ located only in specific regions. This makes it less likely that tech companies will enforce a purely Western view of online safety.
