Our featured expert this month is someone who has been working hard to keep people safe online.
Manpreet Singh is a Trust and Safety professional working in London, UK. She has developed her interest in Child Online Safety and Tech Policy through a myriad of experiences, including working as a policy researcher at a leading children’s charity in London and taking part in a multi-stakeholder project examining children and age assurance in the digital age. Her interests include safety by design, data protection, diverse and inclusive content moderation, and reducing online harms. The answers below are not intended to represent any current or past institutional affiliations.
1. The pandemic put a stop to many of the outdoor activities children would normally take part in, shifting their focus to online activities such as gaming or participating in various TikTok challenges. Some of these challenges can be particularly dangerous, e.g. the Milk Crate Challenge, and they are trends platforms can struggle to anticipate or prepare for. With such emerging trends in mind, how can platforms better prepare themselves?
Childhood is increasingly taking place online, and many experts have asserted that the pandemic has accelerated this change. Participating in online activities such as gaming and social media has helped many children cope with frightening and sudden changes to their daily routine, but as you have pointed out, some of these activities can be dangerous.
However, dangerous challenges (or challenges in general) are not a unique feature of the digital world — many ‘offline’ challenges children partake in can be risky, even ones as innocuous as the age-old ding dong ditch. That being said, I do believe that online platforms can adopt safety-by-design approaches to help protect their users. There are a few underlying factors that can make online challenges especially risky for children: children may not be aware of the risks and consequences of participating; the barriers and friction to participating are low; and, importantly, engagement-based ranking creates incentives to participate that are unique to online challenges.
Recently, TikTok announced an expansion of its efforts to combat dangerous challenges, introducing measures such as a dedicated policy category within its Community Guidelines, a four-step campaign encouraging users to “stop, think, decide, act”, and a reporting menu so users can flag problematic challenges. This is a good example of how platforms can address the first factor: children not understanding the consequences of participating in a viral challenge.
I am more concerned with the second and third factors I’ve identified. The viral nature of these online challenges is due in part to how easy it is for a child to watch a video of someone else attempting one and then immediately hit the record button. The Milk Crate Challenge in particular may have some barriers to entry — it’s not likely that a child will have immediate access to that many milk crates — but other dangerous challenges in circulation have virtually no time or elaborate prop requirements. Moreover, collaborative features provided by a platform, such as the ‘duet’ on TikTok, make it even easier and more tempting for a child to record their own attempt. This is another area where safety by design can mitigate risks. Again, I’ll point to some changes TikTok has made to address how easy it is to react to and take part in challenges: TikTok users between the ages of 13 and 15 do not have access to these collaborative features, and users aged 16 and 17 can only ‘stitch’ their videos with Friends by default. These default settings can raise the ‘barrier to entry’ just enough that children may decide it is simply not worth the hassle of configuring settings just to share their own attempt more widely.
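To make the idea of age-based default settings more concrete, here is a minimal sketch of how such defaults could be expressed in code. It is an illustrative assumption rather than TikTok’s actual implementation: the age bands mirror the settings described above, but the function and field names are hypothetical.

```python
# Illustrative sketch only: age-banded collaboration defaults, loosely
# mirroring the policy described above. Names and structure are hypothetical.
from dataclasses import dataclass


@dataclass
class CollaborationDefaults:
    can_duet: bool        # may record alongside someone else's video
    can_stitch: bool      # may clip and respond to someone else's video
    stitch_audience: str  # who may stitch this user's videos by default


def defaults_for_age(age: int) -> CollaborationDefaults:
    """Return conservative collaboration defaults for a given age band."""
    if 13 <= age <= 15:
        # Youngest users: collaborative features unavailable by default.
        return CollaborationDefaults(False, False, "no_one")
    if 16 <= age <= 17:
        # Older teens: features available, but limited to friends by default.
        return CollaborationDefaults(True, True, "friends")
    # Adults: features on with a wider default audience, still configurable.
    return CollaborationDefaults(True, True, "everyone")


if __name__ == "__main__":
    print(defaults_for_age(14))  # collaboration off by default
    print(defaults_for_age(16))  # stitching limited to friends by default
```

The point of the sketch is simply that safer defaults are a design decision: users can still opt into wider sharing, but the extra step adds friction.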
Anyone who has been keeping an eye on digital and tech news over the past few months will not be a stranger to the phrase ‘engagement-based ranking’, and for good reason: it plays a huge role in understanding why children continue to participate in these trending but dangerous challenges. These online challenges, accompanied by hashtags, can garner children hundreds or thousands of likes. This engagement — which many children view as a marker of popularity or social standing — can incentivise them to take actions they might otherwise decide against. Unfortunately, I don’t have a concrete answer for how platforms can mitigate this risk without re-thinking the role of engagement-based ranking, but I hope platforms are inspired to explore other approaches, such as Twitter allowing users to switch back to a reverse-chronological timeline.
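To illustrate why engagement-based ranking rewards viral challenge content, here is a small sketch contrasting an engagement-weighted feed with a reverse-chronological one. The scoring weights and field names are purely illustrative assumptions and do not reflect any platform’s real algorithm.

```python
# Illustrative sketch: engagement-weighted ranking vs. a reverse-chronological
# timeline. Weights and fields are invented for demonstration.
from datetime import datetime, timedelta


def rank_by_engagement(posts):
    # Highly engaged posts surface first regardless of recency, which is
    # what rewards participation in viral challenges.
    return sorted(
        posts,
        key=lambda p: p["likes"] + 2 * p["shares"] + 3 * p["comments"],
        reverse=True,
    )


def rank_reverse_chronological(posts):
    # Newest first; engagement plays no role in the ordering.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)


if __name__ == "__main__":
    now = datetime.now()
    posts = [
        {"id": "challenge_clip", "likes": 900, "shares": 300, "comments": 150,
         "created_at": now - timedelta(hours=12)},
        {"id": "friend_update", "likes": 4, "shares": 0, "comments": 1,
         "created_at": now - timedelta(minutes=5)},
    ]
    print([p["id"] for p in rank_by_engagement(posts)])          # challenge first
    print([p["id"] for p in rank_reverse_chronological(posts)])  # friend first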
2. Upcoming legislation is largely a reflection of existing trust and safety practices at Big Tech. Does the introduction of this legislation, and the resulting burden of implementation, stifle innovation or disadvantage challenger social networks?
Regulation is not always a constraint. It can be an opportunity to level the playing field, drive tech companies towards new innovation, or force them to meet the changing expectations and demands users have of the services and platforms they use. The right legislation could lead to the development of innovative products and services; for example, it wouldn’t surprise me if in the next few years we saw a growth in third-party apps that help people manage their digital footprint, or browser extensions that help users scan for potentially misleading information. I do see this as an opportunity for newer players to introduce novel approaches and move the field forward.
At the same time, I worry that Big Tech may be better equipped to adapt to these changes than smaller platforms and startups. It takes a lot of manpower and expertise to maintain content moderation policies that can adapt to new abuse trends, or to have dedicated data protection analysts conducting impact assessments, all while trying to attract new users. One solution is to have different tiers of regulation based on the size of the platform, similar to the EU’s approach in the Digital Services Act, but given that any new startup can go viral overnight, it’s not clear how well this would work in the long run.
I am also seeing many new players enter the field that challenge how we categorise a ‘social network’, for example the audio-only Clubhouse. I would like to see upcoming legislation encompass platforms more broadly, even thinking beyond the distinction between ‘user-generated content’ and everything else.
3. How do we create online spaces or content moderation policies with diverse users in mind? For example, children, LGBTQ users, non-native English speakers, and those with accessibility needs may each require extra protections online.
This is an extremely important question and one that I am really passionate about. I think the digital world can be brilliant at providing information, community, or a safe space for different groups of people. I think of stories of Google Maps helping people find wheelchair-accessible places, or of social media being used as a safer alternative to dating apps for young people in the LGBTQ community. This makes it all the more important that, as T&S professionals, we keep platforms safe for everyone.
It can be difficult to factor in, or even be aware of, the concerns of various groups when those creating content moderation policies are themselves homogenous. This is why it’s crucial that T&S professionals strive to learn about — or be representative of — diverse groups, so content moderation can be inclusive of many different ideas of safety. For example, it is very easy for platforms to define image-based abuse or ‘non-consensual intimate images’ (NCII) as a nude or otherwise sexually explicit image of a user shared without their consent. But for women who wear a hijab, an image of them without it, distributed without their consent, must also be viewed as NCII.
There is no “one size fits all” solution when it comes to trust and safety, so platforms will need to devote more time and resources to this area than they may have in the past. But there are general measures that will help keep many users safe: stronger data privacy, more options for users to tailor their experiences, and safety tools and resources provided in many languages.
4. As moderation efforts depend more on AI, is there a risk of negative unintended consequences? We’ve already seen technology solutions harm marginalised groups and interfere with free expression. How should platforms balance scalability, accuracy and the protection of human rights?
There is absolutely a risk of negative unintended consequences, and I would be nervous to see moderation efforts trend towards prioritising AI-based solutions over a mix of AI and human moderation. AI is a tool, and it can very easily amplify the biases we already have. I’m also hesitant about turning to AI for questions about misinformation, especially political misinformation. It’s not clear who users should look to as the authority on what information is accurate, especially when many turn to online platforms to protest and find solidarity against systemic injustices. This can have a concrete impact on free expression.
Some ways platforms can balance scalability, accuracy and the upholding of human rights include having:
- Clearly defined content moderation policies — these help users understand up front what is and isn’t allowed
- Mechanisms of redress so users can challenge moderation decisions — moderation will never be perfect, and it’s important that users have the opportunity to speak to an agent when something goes wrong
- Regularly published transparency reports — including metrics on how often governments request content to be taken down, and opportunities for improvement
- Regularly conducted impact assessments — it’s important that these impact assessments are undertaken both before and after the launch of a new product or service
- Trust and Safety experts included in the design process — as opposed to being brought on after the service “takes off” or a new product is launched. This is the difference between being reactive and proactive
- AI tools developed by scientists of different backgrounds — or using representative data sets to train models (a toy illustration of the kind of bias this helps avoid is sketched below)
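As a toy illustration of that last point, the sketch below measures the false-positive rate of a naive keyword filter across two user groups. The blocklist, sample posts, and group labels are all invented for demonstration; a real bias audit would use real labelled data and far more robust methods.

```python
# Illustrative sketch: comparing false-positive rates of a naive keyword
# filter across user groups. All data and rules here are invented.
from collections import defaultdict

# Toy labelled samples: (text, user_group, is_actually_abusive)
SAMPLES = [
    ("you are trash", "group_a", True),
    ("great game last night", "group_a", False),
    ("have a lovely day", "group_a", False),
    ("I will bomb your house", "group_b", True),
    ("this dish is the bomb", "group_b", False),   # benign slang
    ("that outfit is killer", "group_b", False),   # benign slang
]

BLOCKLIST = {"trash", "bomb", "killer"}  # hypothetical naive keyword rule


def flags(text: str) -> bool:
    """Naive filter: flag a post if any blocklisted word appears."""
    return any(word in BLOCKLIST for word in text.lower().split())


def false_positive_rates(samples):
    """Per-group rate at which benign posts are wrongly flagged."""
    benign = defaultdict(int)
    wrongly_flagged = defaultdict(int)
    for text, group, is_abusive in samples:
        if not is_abusive:
            benign[group] += 1
            if flags(text):
                wrongly_flagged[group] += 1
    return {g: wrongly_flagged[g] / benign[g] for g in benign}


if __name__ == "__main__":
    # group_b's benign slang is flagged far more often than group_a's posts,
    # which is the kind of disparity representative data and diverse teams
    # help surface before launch.
    print(false_positive_rates(SAMPLES))
```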
5. What can tech companies do, if anything, to attract a more diverse workforce?
I’m really optimistic about a more diverse tech workforce after seeing many big companies adopt more flexible working patterns. Expanding paid holiday, sick leave, adoption, and bereavement policies, as Bumble did in 2021, is another fantastic step towards accommodating brilliant people from underrepresented groups. It’s important to ingrain diversity and inclusion in the organisational structure of these companies, for example by having dedicated T&S teams working together all over the world rather than ‘hubs’ located only in specific regions. This makes it less likely that tech companies will enforce a purely Western view of online safety.