
Bants or Bullying: The impact of social media on minors

Author: Stephanie Borne, digital transformation and innovation strategist, change maker and inclusivity champion.

Background: Stephanie has worked on digital transformation and digital communities for some of the UK’s most respected organisations, including ChildLine, the NSPCC, Plan International and Shelter.

With growing concerns over digital safety, and incidents (too often fatal) of exposure to harmful content, how can we make the internet a safer place for all?

October was anti-bullying month in the U.S.; November 14th to 18th will be Anti-Bullying Week in England and Wales. Clearly, the world is taking bullying, and increasingly cyberbullying, very seriously. Because cyberbullying affects our children, the world is rallying around making digital safety a reality. But the internet has become a place where we have to watch our step and be mindful of what we share and who we interact with, from online dating to gaming, or simply taking part in conversations on open platforms or in closed forums. No one is really safe from harm.

What follows is relevant whatever platform or community you are creating and managing, and a reminder that now is the time to take action.

When my daughter turned 13, accessing her phone and checking in became trickier than before. Her new sense of being a teenager and claiming her independence expressed itself in a refusal to hand over her phone and an assertion of her right to privacy. And try to reason with a teenager without getting into an argument…

She would tell me about strange messages or behaviours that she identified as bullying. But she asked me to please, please, please not report them or contact the school, for fear of embarrassment.

More worryingly, when I finally saw the messages, I struggled to understand what I was reading. Going through her contacts’ profiles was a painful exercise in sorting the legitimate profiles of friends, hidden behind very entertaining aliases, from the predatory ones. And there were a few.

Even more worryingly, some messages from friends were akin to bullying, but she dismissed those as innocent banter. And I am a parent working in digital: I have designed and deployed social media campaigns, worked with all the major open platforms, and I understand how they work.

In 2017, Molly Russell, 14 years old, took her own life after suffering a period of depression. The official report, available online, states that “The way that the platforms operated meant that Molly had access to images, video clips and text concerning or concerned with self-harm, suicide or that were otherwise negative or depressing in nature. The platform operated in such a way using algorithms as to result, in some circumstances, of binge periods of images, video clips and text some of which were selected and provided without Molly requesting them.”

Molly’s passing, although more related to harmful content than to bullying, is still an extremely sad reminder that our children aren’t safe, that we struggle to keep them safe, that governments aren’t able to agree on effective regulations, and that platforms aren’t doing enough or taking responsibility.

Andrew Walker, the coroner for the Northern District of Greater London, concluded Molly died from an act of self-harm while suffering depression and strongly highlighted the negative effects of online content.

He said the images of self-harm and suicide she viewed “shouldn’t have been available for a child to see”, and that social media content contributed “more than minimally” to Molly’s death.

However sad it is that a young girl had to lose her life for things to move in a more reassuring direction, they do seem to be moving…

For example, The National Society for the Prevention of Cruelty to Children (NSPCC) in the U.K. is calling for tech platforms to take responsibility for protecting young people. Prince William says online safety for young people should be “a prerequisite, not an afterthought”.

In a first for both Meta, which owns Instagram, and Pinterest, senior executives had to give evidence under oath in a U.K. court (Source: BBC).

And while governments do struggle with the “safeguarding versus freedom of speech” argument, regulations are about to get, if not clearer, then stricter. Like the 2016 EU General Data Protection Regulation (GDPR), which came about after the suicide of Olive Cooke, a 92-year-old pensioner overwhelmed by the 3,000 donation requests she would receive in a year, the new regulations may take a while to be implemented, but they will come with the serious risk of fines that could be as high as 8 to 10% of a business’s turnover.

All this to say that we can only hope for change.

The question is, who should and who will take preventive action?

So who’s responsible for helping tackle cyberbullying? Children?

Children and teens are very resourceful. They know about bullying and try to help each other. As digital natives, they see very little difference between the real and the virtual world. They adopt platforms and apps and navigate between them seamlessly. They know there is the potential for harm, but they can’t be expected to spot signs of abuse and are even less likely to be in control of online interactions.

Data around digital safety and cyberbullying is sparse or at best inconsistent. But we all know how damaging it can be. Not every child can talk to their parents; not every parent is digitally savvy enough to understand.

They are children and young people; yes, they need educating, but they can’t be held responsible if the world they inherit is unsafe.

Parents and educators?

We all know too well how difficult it is to keep up with the digital world. We as adults give away our data and accept that platforms will serve us content that we haven’t asked for. But most importantly, we know that reasoning with a teenager is not such an easy task.

Simply put, parents, regulators and educators struggle to keep up.

While platforms claim to deploy ways of keeping users safe, the onus seems to remain on users to understand how to protect themselves. Teachers are expected to educate young people in digital citizenship; parents are expected to do the same while at the same time monitoring their child’s activity.

Of course, coining terms such as digital citizenship, “the ability to safely and responsibly access digital technologies, as well as being an active and respectful member of society, both online and offline” (Source: FutureLearn), is helpful.

The term and the concept are becoming more widely spread and shared.

But for educators it comes on top of existing curricula and means yet another topic to understand and master in the first place, and then teach. So there is still a long way to go.

The platforms?

If you search for what platforms are doing to prevent bullying or harmful content, they happily present a mix of human and technological measures. Yet examples of distressing content flourish, with extremely serious and even fatal consequences for users’ mental health and lives.

When Adam Mosseri took over as Head of Instagram, he announced his intention to get serious about safety on the platform in 2019: “We are in a pivotal moment,” he said. “We want to lead the industry in this fight.” (Source: Time). New features were tested, but we have yet to see any real progress.

Facebook and Instagram’s own research showed that use of the platform made body image issues worse for girls. Yet they will not accept the overall impact of their product on young people’s mental health (Source: The Guardian).

Systematically, platforms revert to putting the responsibility on users to check privacy and safety features and protect themselves.

Platforms’ financial interests play a big role in how far they push the safety agenda, encouraging them to make compromises and focus on financial gains. They are businesses, after all, and have targets to meet.

But this doesn’t excuse failing to put everything in place to protect the communities they help create and facilitate. Gone are the days when one could expect communities to self-regulate. And when it comes to young people, despite their amazing peer-to-peer support ethos, they should be able to expect to be kept safe from harm, so that they can enjoy and benefit from what these communities offer.

So if we are managing an online community, the argument that good moderation practices will help keep our brands safe from reputational harm, or from the risk of being fined (or worse) for not meeting regulatory requirements, should already be motivation enough to review moderation tools and processes. But by facilitating engagement and the creation and sharing of content, we are ultimately responsible for our users’ and moderators’ safety, and this should be a given.
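For community managers taking stock of those tools and processes, the short sketch below illustrates the general idea of pre-moderation: screening a message before it is published and routing doubtful cases to a human reviewer rather than letting them go live by default. It is a minimal, hypothetical Python example; the term lists are placeholders standing in for a real classifier or moderation service, not a recommendation of any particular policy.

# Minimal pre-moderation sketch (hypothetical): screen user-generated text
# before publishing and send uncertain cases to a human moderator.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str  # "publish", "review" or "block"
    reason: str

# Placeholder word lists standing in for a real classifier or moderation API.
BLOCK_TERMS = {"kill yourself", "kys"}
REVIEW_TERMS = {"loser", "ugly", "nobody likes you"}

def premoderate(text: str) -> ModerationDecision:
    """Decide what happens to a message before other users can see it."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCK_TERMS):
        return ModerationDecision("block", "matches a high-severity term")
    if any(term in lowered for term in REVIEW_TERMS):
        return ModerationDecision("review", "matches a possible-bullying term")
    return ModerationDecision("publish", "no rule matched")

if __name__ == "__main__":
    for message in ["See you at practice!", "You are such a loser"]:
        decision = premoderate(message)
        print(f"{message!r} -> {decision.action} ({decision.reason})")

The design choice that matters here is the default: content that trips a rule is held back for review rather than published first and cleaned up later.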

In short, the internet will only be a safer place if everyone takes responsibility and acts on harmful content and behaviours within their remit. Governments need to take action, parents need to educate, and platforms need to implement the safest possible solutions. Apologising isn’t enough.
