
Bants or Bullying: The impact of social media on minors

Author: Stephanie Borne, digital transformation and innovation strategist, change maker and inclusivity champion.

Background: Stephanie has worked on digital transformation and digital communities for some of the UK’s most respected organisations, including ChildLine, the NSPCC, Plan International and Shelter.

With growing concerns over digital safety, and incidents (too often fatal) of exposure to harmful content, how can we make the internet a safer place for all?

October was anti-bullying month in the U.S.; November 14th to 18th will be Anti-Bullying Week in England and Wales. Clearly, the world is taking bullying and, increasingly, cyberbullying very seriously. Because cyberbullying affects our children, the world is rallying around making digital safety a reality. But the internet has become a place where we have to watch our step and be mindful of what we share and whom we interact with, from online dating to gaming, or simply taking part in conversations on open platforms or in closed forums. No one is really safe from harm.

What follows is relevant whatever platform or community you are creating and managing, and a reminder that now is the time to take action.

When my daughter turned 13, accessing her phone and checking in became trickier than before. Her new sense of being a teenager and claiming her independence expressed itself in refusing to hand over her phone and asserting her right to privacy. And try reasoning with a teenager without getting into an argument…

She would report some strange messages or behaviours that she identified as bullyish. But she asked me to please, please, please not report it or contact the school, for fear of embarrassment.

More worryingly, after finally seeing the messages, I struggled to understand what I was reading. Going through her contacts’ profiles was a painful exercise in sorting legitimate profiles of friends, using very entertaining aliases, from the predatory ones. And there were a few.

Even more worryingly, some messages from friends were akin to bullying, but she dismissed those as innocent banter. And I am a parent working in digital. I’ve designed and deployed social media campaigns and worked with all the major open platforms; I understand how they work.

In 2017, Molly Russell, 14 years old, took her own life after suffering a period of depression. The official report, available online, states that “The way that the platforms operated meant that Molly had access to images, video clips and text concerning or concerned with self-harm, suicide or that were otherwise negative or depressing in nature. The platform operated in such a way using algorithms as to result, in some circumstances, of binge periods of images, video clips and text some of which were selected and provided without Molly requesting them.”

Molly’s passing, although more related to harmful content than to bullying, is still an extremely sad reminder that our children aren’t safe, that we struggle to keep them safe, that governments aren’t able to agree on regulations that are effective, and that platforms aren’t doing enough or taking responsibility.

Andrew Walker, the coroner for the Northern District of Greater London, concluded Molly died from an act of self-harm while suffering depression and strongly highlighted the negative effects of online content.

He said the images of self-harm and suicide she viewed “shouldn’t have been available for a child to see”, and that social media content contributed “more than minimally” to Molly’s death.

However sad it is that a young girl had to lose her life to see things move in a more reassuring direction, things do seem to be moving…

For example, The National Society for the Prevention of Cruelty to Children (NSPCC) in the U.K. is calling for tech platforms to take responsibility for protecting young people. Prince William says online safety for young people should be “a prerequisite, not an afterthought”.

In a first for both Meta, which owns Instagram, and Pinterest, senior executives had to give evidence under oath in a U.K. court (Source: BBC).

And if governments do struggle with the “safeguarding versus freedom of speech” argument, regulations are about to get, if not clearer, then stricter. Like the 2016 EU General Data Protection Regulation (GDPR), which came about after the suicide of Olive Cooke, a 92-year-old pensioner overwhelmed by the 3,000 donation requests she received in a year, the new regulations may take a while to be implemented, but they will come with the serious risk of fines that could be as high as 8 to 10% of a business’s turnover.

All this to say that we can only hope for change.

The question is, who should and who will take preventive action?

So who’s responsible for helping tackle cyberbullying? Children?

Children and teens are very resourceful. They know about bullying and try to help each other. As digital natives, they see very little difference between the real and the virtual world; they adopt platforms and apps and move between them seamlessly. They know there is potential for harm, but they can’t be expected to spot signs of abuse, and they are even less likely to be in control of online interactions.

Data around digital safety and cyberbullying is sparse or at best inconsistent. But we all know how damaging it can be. Not every child can talk to their parents; not every parent is digitally savvy enough to understand.

They are children and young people; yes, they need educating, but they can’t be held responsible if the world they inherit is unsafe.

Parents and educators?

We all know too well how difficult it is to keep up with the digital world. We as adults give away our data and accept that platforms will serve us content that we haven’t asked for. But most importantly, we know that reasoning with a teenager is not an easy task.

Simply put, parents, regulators and educators struggle to keep up.

While platforms claim to deploy ways of keeping users safe, the onus seems to remain on users to understand how to protect themselves. Teachers are expected to educate young people in digital citizenship; parents are expected to do the same while at the same time monitoring their child’s activity.

Of course, coining terms such as “digital citizenship”, defined by FutureLearn as “the ability to safely and responsibly access digital technologies, as well as being an active and respectful member of society, both online and offline”, is helpful.

The term and the concept are becoming more widely known and shared.

But for educators, it comes on top of existing curricula and means yet another topic to understand, master and then teach. So there is still a long way to go.

The platforms?

If you search for what platforms are doing to prevent bullying or harmful content, they happily present a mix of human and tech measures. Yet examples of distressing content flourish, with extremely serious and even fatal consequences for users’ mental health and lives.

In 2019, Adam Mosseri, the Head of Instagram, announced his intention to get serious about safety on the platform: “We are in a pivotal moment,” he told Time. “We want to lead the industry in this fight.” New features were tested, but we are yet to see any real progress.

Facebook and Instagram’s own research showed that using the platform made body-image issues worse for teenage girls. Yet they will not accept the overall impact of their product on young people’s mental health (Source: The Guardian).

Systematically, platforms revert to putting the responsibility on users to check privacy and safety features and protect themselves.

Platforms’ financial interests play a big role in why the safety agenda isn’t pushed harder, encouraging them to make compromises and focus on financial gains. They are businesses, after all, with targets to meet.

But this doesn’t excuse failing to put everything in place to protect the communities they help create and facilitate. Gone are the days when one could expect communities to self-regulate. And when it comes to young people, despite their amazing peer-to-peer support ethos, they should be able to expect to be kept safe from harm and to enjoy and benefit from what these communities can offer.

So if we are managing an online community, the argument that good moderation practices help keep our brands safe from reputational harm, or from the risk of being fined (or worse) for not meeting regulatory requirements, should already be motivation enough to review our moderation tools and processes. But by facilitating engagement and the creation and sharing of content, we are ultimately responsible for our users’ and moderators’ safety, and this should be a given.
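For community managers wondering what “reviewing moderation tools and processes” can look like in practice, here is a purely illustrative sketch of a pre-moderation gate that routes user-generated text to publish, human review, or block. The categories, patterns, thresholds and function names are hypothetical examples of mine, not any platform’s or vendor’s actual policy or API; real systems combine machine-learning classifiers, context and trained human moderators rather than keyword lists.

```python
# Illustrative sketch only: a minimal pre-moderation gate for user-generated text.
# All categories, patterns and thresholds below are hypothetical examples,
# not any real platform's policy or API.
import re
from dataclasses import dataclass

# Hypothetical policy: each category maps a regex pattern to a severity score.
POLICY = {
    "self_harm": (re.compile(r"\b(kill myself|self[- ]harm|suicide)\b", re.I), 1.0),
    "bullying": (re.compile(r"\b(loser|nobody likes you|go away forever)\b", re.I), 0.6),
    "profanity": (re.compile(r"\b(damn|crap)\b", re.I), 0.2),
}

@dataclass
class Decision:
    action: str            # "publish", "review" or "block"
    category: str | None   # highest-severity category matched, if any
    score: float           # severity of that match

def moderate(text: str, review_threshold: float = 0.5, block_threshold: float = 0.9) -> Decision:
    """Return a routing decision for one piece of user-generated text."""
    worst_category, worst_score = None, 0.0
    for category, (pattern, severity) in POLICY.items():
        if pattern.search(text) and severity > worst_score:
            worst_category, worst_score = category, severity
    if worst_score >= block_threshold:
        return Decision("block", worst_category, worst_score)   # hide and escalate to a human
    if worst_score >= review_threshold:
        return Decision("review", worst_category, worst_score)  # queue for a moderator
    return Decision("publish", worst_category, worst_score)     # let it through

if __name__ == "__main__":
    for message in ["great game last night!", "nobody likes you, just go away forever"]:
        print(message, "->", moderate(message))
```

Even this toy example makes the limits obvious: keyword rules cannot tell banter from bullying, which is exactly why human review, context and better detection need to sit behind any automated first pass.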

In short, the internet will only be a safer place if we all take responsibility and act on harmful content and behaviours within our remit. Governments need to take action, parents need to educate, and platforms need to implement the safest possible solutions. Apologising isn’t enough.
