Bants or Bullying: The impact of social media on minors

Author: Stephanie Borne, digital transformation and innovation strategist, change maker and inclusivity champion.

Background: Stephanie has worked on digital transformation and digital communities for some of the UK’s most respected organisations, including ChildLine, the NSPCC, Plan International and Shelter.

With growing concerns over digital safety, and incidents (too often fatal) of exposure to harmful content, how can we make the internet a safer place for all?

October was anti-bullying month in the U.S.; November 14th to 18th will be Anti-Bullying Week in England and Wales. Clearly, the world is taking bullying and (increasingly) cyberbullying very seriously. Because cyberbullying affects our children, the world is rallying around making digital safety a reality. But the internet has become a place where we have to watch our step and be mindful of what we share and who we interact with, from online dating to gaming, or simply taking part in conversations on open platforms or in closed forums. No one is really safe from harm.

What follows is relevant whatever platform or community you are creating and managing, and a reminder that now is the time to take action.

When my daughter turned 13, accessing her phone and checking in became trickier than before. Her newfound sense of being a teenager and claiming her independence expressed itself in refusing to hand over her phone and asserting her right to privacy. And try reasoning with a teenager without getting into an argument…

She would report some strange messages or behaviours that she identified as bullyish. But she asked me to please, please, please not report it or contact the school, for fear of embarrassment.

More worryingly, after finally seeing the messages, I struggled to understand what I was reading. Going through her contacts’ profiles was a painful exercise in sorting legitimate profiles of friends, using very entertaining aliases, from the predatory ones. And there were a few.

Even more worryingly, some messages from friends were akin to bullying, but she dismissed those as innocent banter. And I am a parent working in digital. I’ve designed and deployed social media campaigns and worked with all the open platforms; I understand how they work.

In 2017, Molly Russell, 14 years old, took her own life after suffering a period of depression. The official report, available online, states that “The way that the platforms operated meant that Molly had access to images, video clips and text concerning or concerned with self-harm, suicide or that were otherwise negative or depressing in nature. The platform operated in such a way using algorithms as to result, in some circumstances, of binge periods of images, video clips and text some of which were selected and provided without Molly requesting them.”

Molly’s passing, although related more to harmful content than to bullying, is still an extremely sad reminder that our children aren’t safe, that we struggle to keep them safe, that governments aren’t able to agree on regulations that are effective, and that platforms aren’t doing enough or taking responsibility.

Andrew Walker, the coroner for the Northern District of Greater London, concluded that Molly died from an act of self-harm while suffering from depression, and strongly highlighted the negative effects of online content.

He said the images of self-harm and suicide she viewed “shouldn’t have been available for a child to see”, and that social media content contributed “more than minimally” to Molly’s death.

However sad it is that a young girl had to lose her life to see things move in a more reassuring direction, things do seem to be moving…

For example, The National Society for the Prevention of Cruelty to Children (NSPCC) in the U.K. is calling for tech platforms to take responsibility for protecting young people. Prince William says online safety for young people should be “a prerequisite, not an afterthought”.

In a first for both Meta, which owns Instagram, and Pinterest, senior executives had to give evidence under oath in a U.K. court (Source: BBC).

And if governments do struggle with the “safeguarding versus freedom of speech” argument, regulations are about to get, if not clearer, then stricter. Like the 2016 EU General Data Protection Regulation (GDPR), which came about after the suicide of Olive Cooke, a 92-year-old pensioner overwhelmed by the 3,000 donation requests she would receive in a year, the new regulations may take a while to be implemented, but they will come with the serious risk of fines that could be as high as 8 to 10% of a business’s turnover.

All this to say that we can only hope for change.

The question is, who should and who will take preventive action?

So who’s responsible for helping tackle cyberbullying? Children?

Children and teens are very resourceful. They know about bullying and try to help each other. As digital natives, they see very little difference between the real and the virtual world. They adopt platforms and apps and navigate between them seamlessly. They know there is the potential for harm, but they can’t be expected to spot the signs of abuse, and they are even less likely to be in control of online interactions.

Data around digital safety and cyberbullying is sparse or at best inconsistent. But we all know how damaging it can be. Not every child can talk to their parents; not every parent is digitally savvy enough to understand.

They are children and young people; yes, they need educating, but they can’t be held responsible if the world they inherit is unsafe.

Parents and educators?

We all know too well how difficult it is to keep up with the digital world. We as adults give away our data and accept that platforms will serve us content that we haven’t asked for. But most importantly, we know that reasoning with a teenager is not an easy task.

Simply put, parents, regulators and educators struggle to keep up.

While platforms claim to deploy ways of keeping users safe, the onus seems to remain on users to understand how to protect themselves. Teachers are expected to educate young people in digital citizenship; parents are expected to do the same while also monitoring their child’s activity.

Of course, coining terms such as “digital citizenship”, defined by FutureLearn as “the ability to safely and responsibly access digital technologies, as well as being an active and respectful member of society, both online and offline”, is helpful.

The term and the concept behind it are becoming more widely known and shared.

But for educators, it comes on top of existing curricula and means yet another topic to understand and master before they can teach it. So there is still a long way to go.

The platforms?

If you search for what platforms are doing to prevent bullying or harmful content, they happily present a mix of human and tech measures. Yet examples of distressing content flourish, with extremely serious and even fatal consequences for users’ mental health and lives.

In 2019, Adam Mosseri, as Head of Instagram, announced his intention to get serious about safety on the platform: “We are in a pivotal moment,” he told Time. “We want to lead the industry in this fight.” New features were tested, but we are yet to see any real progress.

Facebook and Instagram’s own research showed that use of the platform made body-image issues worse for girls. Yet they will not accept the overall impact of their product on young people’s mental health (Source: The Guardian).

Time and again, platforms revert to putting the responsibility on users to check privacy and safety features and protect themselves.

Platforms’ financial interests play a big role in how far they push the safety agenda, encouraging them to make compromises and focus on the gains. They are businesses, after all, and have targets to meet.

But this doesn’t excuse failing to put everything in place to protect the communities they help create and facilitate. Gone are the days when one could expect communities to self-regulate. And when it comes to young people, despite their amazing peer-to-peer support ethos, they should be able to expect to be kept safe from harm, and to enjoy and benefit from what these communities can offer.

So if we are managing an online community, the argument that good moderation practices help keep our brands safe from reputational harm, or from the risk of fines (or worse) for not meeting regulatory requirements, should already be motivation enough to review our moderation tools and processes. But by facilitating engagement and the creation and sharing of content, we are ultimately responsible for our users’ and moderators’ safety, and this should be a given.
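To make the “review moderation tools and processes” point concrete, here is a minimal sketch of one common pattern: screening user-generated posts before they go live and routing anything flagged to a human moderator rather than publishing it straight away. This is an illustrative assumption, not Checkstep’s product or any specific platform’s API; every name in it (check_post, BLOCKLIST and so on) is hypothetical.

```python
# Minimal pre-moderation sketch (assumed, illustrative names only).
# Posts are screened before publication; anything flagged goes to a
# human review queue instead of straight onto the feed.
from dataclasses import dataclass

# Illustrative terms only; a real deployment would use trained
# classifiers and policy-specific lists, not a hard-coded set.
BLOCKLIST = {"kys", "nobody likes you", "everyone hates you"}

@dataclass
class Post:
    author_id: str
    text: str

@dataclass
class Decision:
    action: str   # "publish" or "review"
    reason: str

def check_post(post: Post) -> Decision:
    """Screen a post before it reaches the community feed."""
    lowered = post.text.lower()
    hits = [term for term in BLOCKLIST if term in lowered]
    if hits:
        # Don't silently delete: queue for a human moderator so context
        # (banter between friends vs. targeted abuse) can be judged.
        return Decision(action="review", reason=f"matched terms: {hits}")
    return Decision(action="publish", reason="no policy match")

if __name__ == "__main__":
    sample = Post(author_id="u123", text="Nobody likes you, just quit already")
    print(check_post(sample))   # -> Decision(action='review', ...)
```

The point of the sketch is the routing decision, not the detection: in practice the hard-coded list would be replaced by trained models and policy rules, and the “review” queue is precisely where the banter-versus-bullying judgment stays with a human.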

In short, the internet will only be a safer place if everyone takes responsibility and acts on harmful content and behaviour within their remit. Governments need to take action, parents need to educate, and platforms need to implement the safest possible solutions. Apologising isn’t enough.
