
Expert’s Corner with T&S Expert Manpreet Singh

Our featured expert this month is someone who has been working hard to keep people safe online.

Manpreet Singh is a Trust and Safety professional working in London, UK. She has developed her interest in Child Online Safety and Tech Policy through a myriad of experiences, including working as a policy researcher at a leading children’s charity in London and taking part in a multi-stakeholder project examining children and age assurance in the digital age. Her interests include safety by design, data protection, diverse and inclusive content moderation, and reducing online harms. The answers below are not intended to represent any current or past institutional affiliations.

1. The pandemic put a stop to many of the outdoor activities children would normally take part in, shifting their focus to online activities such as gaming or participating in various TikTok challenges. Some of these challenges can be particularly dangerous, e.g. the Milk Crate Challenge, and they are difficult for platforms to anticipate or prepare for. With such emerging trends, how can platforms better prepare themselves?

Childhood is increasingly taking place online, and many experts have asserted that the pandemic has accelerated this change. Participating in online activities such as gaming and social media has helped many children cope with frightening and sudden changes to their daily routine, but as you have pointed out, some of these activities can be dangerous.

However, dangerous challenges (or challenges in general) are not a unique feature of the digital world — many ‘offline’ challenges children partake in can be risky, even ones as innocuous as the age-old ding dong ditch. That being said, I do believe that online platforms can adopt safety-by-design approaches to help protect their users. There are a few underlying factors that can make online challenges particularly risky for children: children may not be aware of the risks and consequences of participating, the barriers and friction to participating are low, and, importantly, engagement-based ranking creates incentives to participate that are unique to the online world.

Recently, TikTok announced an expansion of its efforts to combat dangerous challenges, introducing measures such as a dedicated policy category within its Community Guidelines, a 4-step campaign encouraging users to “stop, think, decide, act”, and a reporting menu so users can flag problematic challenges. This is a good example of how platforms can address the first factor: children not understanding the consequences of participating in a viral challenge.

I am more concerned with the second and third factors I’ve identified. The viral nature of these online challenges is in part due to how easy it is for a child to watch a video of someone else attempting one, and then immediately hit the record button. The Milk Crate Challenge in particular may have some barriers to entry — it’s not likely that a child will have access to that many milk crates immediately — but other dangerous challenges circulating have virtually no time or elaborate prop requirements. Moreover, collaborative features provided by a platform, such as the ‘duet’ on TikTok, make it even easier and more tempting for a child to record their own attempt. This is another area where safety-by-design can mitigate risks. Again, I’ll point to some changes TikTok has made to address how easy it is to react to and take part in challenges: TikTok users between the ages of 13 and 15 do not have access to such collaborative features, and users aged 16 and 17 can only ‘stitch’ their videos with Friends by default. These default settings can raise the ‘barrier to entry’ to these challenges just enough that children may decide it is simply not worth the hassle of configuring settings just to share their own attempt more widely.
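To make the idea of age-banded defaults a little more concrete, here is a rough sketch of how such defaults could be expressed in a platform's configuration code. The age bands echo the TikTok example above, but the names, structure, and thresholds are my own illustrative assumptions, not any platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CollaborationDefaults:
    duet_enabled: bool    # can other users react to this user's videos?
    stitch_audience: str  # "nobody", "friends", or "everyone"

def defaults_for_age(age: int) -> CollaborationDefaults:
    """Return conservative collaboration defaults that scale with age (illustrative only)."""
    if age < 16:
        # Youngest users: collaborative features off by default.
        return CollaborationDefaults(duet_enabled=False, stitch_audience="nobody")
    if age < 18:
        # 16-17: collaboration limited to friends unless explicitly widened in settings.
        return CollaborationDefaults(duet_enabled=True, stitch_audience="friends")
    # Adults: open defaults, still overridable in settings.
    return CollaborationDefaults(duet_enabled=True, stitch_audience="everyone")

print(defaults_for_age(14))  # duet off, stitch audience "nobody"
```

The point is not the specific thresholds but the pattern: the safest configuration is the starting point, and widening it requires a deliberate action from the user.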

Anyone who has been keeping an eye on digital/tech news over the past few months will not be a stranger to the phrase ‘engagement-based ranking’, and for good reason: it plays a huge role in understanding why children continue to participate in these trending but dangerous challenges. These online challenges, accompanied by hashtags, can garner children hundreds or thousands of likes. This engagement — which in many ways is viewed by children as a marker of popularity or social standing — can often incentivise them to take actions that they might otherwise decide against. Unfortunately, I don’t have a concrete answer for how platforms can mitigate this risk without re-thinking the role of engagement-based ranking, but I hope that platforms are inspired to explore other methods, such as Twitter allowing users to revert to a reverse-chronological timeline.
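As a toy illustration of why the choice of ranking matters, here is a small sketch contrasting an engagement-based ordering with a reverse-chronological one. The post fields and scoring formula are invented for the example and are not how any real platform ranks content.

```python
from datetime import datetime

# Toy feed of posts; all fields are invented for illustration.
posts = [
    {"id": "a", "likes": 950, "shares": 120, "posted": datetime(2022, 1, 10, 9)},
    {"id": "b", "likes": 12,  "shares": 1,   "posted": datetime(2022, 1, 10, 12)},
    {"id": "c", "likes": 300, "shares": 45,  "posted": datetime(2022, 1, 9, 18)},
]

def engagement_score(post):
    # Simplified engagement signal: weighted sum of likes and shares.
    return post["likes"] + 3 * post["shares"]

# Engagement-based ranking surfaces whatever is attracting the most reactions,
# regardless of when it was posted.
ranked = sorted(posts, key=engagement_score, reverse=True)

# A reverse-chronological timeline simply shows the newest posts first.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

print([p["id"] for p in ranked])         # ['a', 'c', 'b']
print([p["id"] for p in chronological])  # ['b', 'a', 'c']
```

Under engagement-based ranking, the post already getting the most reactions keeps being shown, which is exactly the feedback loop that rewards children for joining a viral challenge.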

2. Upcoming legislation is largely a reflection of existing trust and safety practices at Big Tech. Does the introduction of this legislation, and the resulting burden of implementation, stifle innovation or disadvantage challenger social networks?

Regulation is not always a constraint. It can be an opportunity to level the playing field, drive tech companies towards new innovation, or force them to meet the changing expectations and demands users place on the services and platforms they use. The right legislation could lead to the development of innovative products and services; for example, it wouldn’t surprise me if in the next few years we saw a growth in third-party apps that help people manage their digital footprint, or browser extensions that help users scan for potentially misleading information. I do see this as an opportunity for newer players to introduce novel approaches and move the field forward.

At the same time, I worry that Big Tech may be better equipped to adapt to these changes than smaller platforms and startups. It takes a lot of manpower and expertise to maintain content moderation policies that can adapt to new abuse trends, or to have dedicated data protection analysts conducting impact assessments, all while trying to attract new users. One solution is to have different tiers of regulation based on the size of the platform, similar to the EU’s approach in the Digital Services Act, but given that any new startup can go viral overnight, it’s not clear how well this would work in the long run.

I am also seeing many new players entering the field that challenge how we categorise what a ‘social network’ is, such as the audio-only Clubhouse. I would like to see upcoming legislation encompass platforms more broadly, even thinking beyond distinctions of ‘user-generated content’ vs. not.

3. How do we create online spaces or content moderation policies with diverse users in mind? For example, children may require extra protections online, as may LGBTQ users, non-native English speakers, or those with accessibility needs.

This is an extremely important question and one that I am really passionate about. I think the digital world can be brilliant at providing information, community, or a safe space for different groups of people. I think of Google Maps helping people find wheelchair-accessible places, or social media being used as a safer alternative to dating apps for young people in the LGBTQ community. It’s all the more important that, as T&S professionals, we keep platforms safe for everyone.

It can be difficult to factor in, or even be aware of, the concerns of various groups when those creating content moderation policies are themselves homogenous. This is why it’s crucial that T&S professionals strive to learn about — or be representative of — diverse groups, so content moderation can be inclusive of many different ideas of safety. For example, it is very easy for platforms to define image-based abuse or ‘non-consensual intimate images’ (NCII) as a nude or otherwise sexually explicit image of a user shared without their consent. But for women who wear a Hijab, an image of them without one, distributed without their consent, must also be viewed as NCII.

There is no “one size fits all” solution when it comes to trust and safety, so platforms will need to devote more time and resources to this area than they may have in the past. But there are general measures that will help keep many users safe: for example, more (data) privacy, more options for users to tailor their experiences, and safety tools and resources provided in many languages.

4. As moderation efforts depend more on AI, is there a risk of negative unintended consequences? We’ve already seen harm to marginalised groups from technology solutions that interfere with free expression. With this in mind, how should platforms balance scalability, accuracy and protecting human rights?

There is absolutely a risk of negative unintended consequences, and I would be nervous to see moderation efforts trend towards prioritising AI-based solutions over a mix of AI and human moderation. AI is a tool, and it can very easily amplify existing biases we may have. I’m also hesitant about turning to AI for questions about misinformation, especially political misinformation. It’s not clear who users should look to as the authority on what information is accurate, especially when many turn to online platforms to protest and find solidarity against systemic injustices. This can have a concrete impact on free expression.
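To illustrate what a mix of AI and human moderation can look like in practice, here is a minimal sketch of a common routing pattern: the model acts on its own only when it is very confident, and ambiguous cases go to a human reviewer. The thresholds, names, and scores are assumptions for the sketch, not a description of any real system.

```python
# Assumed thresholds for the sketch; real systems tune these per policy area.
AUTO_REMOVE_THRESHOLD = 0.98   # act automatically only when near-certain
HUMAN_REVIEW_THRESHOLD = 0.60  # anything ambiguous goes to a person

def route_content(item_id: str, violation_probability: float) -> str:
    """Decide what happens to a piece of content given a classifier's score."""
    if violation_probability >= AUTO_REMOVE_THRESHOLD:
        return f"{item_id}: removed automatically (with an option to appeal)"
    if violation_probability >= HUMAN_REVIEW_THRESHOLD:
        return f"{item_id}: queued for human review"
    return f"{item_id}: no action"

for item, score in [("post-1", 0.99), ("post-2", 0.75), ("post-3", 0.10)]:
    print(route_content(item, score))
```

Keeping a human in the loop for the uncertain middle band, and pairing automatic removals with an appeal route, is one way to get scale without giving the model the final word on borderline speech.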

Some ways platforms can balance scalability, accuracy and upholding human rights include:

  • Clearly defined content moderation policies — this helps users understand what is and isn’t allowed up front
  • Mechanisms of redress so users can challenge moderation decisions — moderation will never be perfect and it’s important that users have the opportunity to speak to an agent when something goes wrong
  • Regularly published transparency reports — including metrics for when governments request content to be taken down and opportunities for improvement
  • Regularly conducted impact assessments — it’s important that these impact assessments are undertaken both before and after the launch of a new product or service
  • Trust and Safety experts included in the design process — as opposed to being brought on after the service “takes off” or a new product is launched. This is the difference between being reactive and proactive
  • AI tools developed by scientists of different backgrounds — or using representative data sets to train models

5. What can tech companies do, if anything, to attract a more diverse workforce?

I’m really optimistic about a more diverse tech workforce after seeing many big companies adopt more flexible working patterns. Expanded paid holiday, sick leave, adoption, and bereavement policies, like those Bumble introduced in 2021, are also fantastic steps towards accommodating brilliant people from underrepresented groups. It’s important to embed diversity and inclusion in the organisational structure of these companies, for example by having dedicated T&S teams working together all over the world rather than ‘hubs’ located only in specific regions. This makes it less likely that tech companies will enforce a purely Western view of online safety.
