Author: Stephanie Borne, digital transformation and innovation strategist, change maker and inclusivity champion.
Background: Stephanie has worked on digital transformation and digital communities for some of the UK’s most respected organisations, including ChildLine, the NSPCC, Plan International and Shelter.
With growing concerns over digital safety, and incidents (too often fatal) of exposure to harmful content, how can we make the internet a safer place for all?
October was National Bullying Prevention Month in the U.S.; November 14th to 18th will be Anti-Bullying Week in England and Wales. Clearly, bullying, and increasingly cyberbullying, is being taken seriously; because it affects our children, the world is rallying around making digital safety a reality. But the internet has become a place where we have to watch our step and be mindful of what we share and who we interact with, from online dating to gaming, or simply taking part in conversations on open platforms or in closed forums. No one is really safe from harm.
What follows is relevant to whatever platform or community you are creating and managing, and a reminder that now is the time to take action.
When my daughter turned 13, accessing her phone and checking in became trickier than before. Her newfound sense of being a teenager and claiming her independence expressed itself in a refusal to hand over her phone and an assertion of her right to privacy. And try reasoning with a teenager without getting into an argument…
She would report strange messages or behaviours that she identified as bullying, but she asked me to please, please, please not report them or contact the school, out of fear of embarrassment.
More worryingly, when I finally saw the messages, I struggled to understand what I was reading. Going through her contacts’ profiles was a painful exercise in sorting the legitimate profiles of friends, hiding behind very entertaining aliases, from the predatory ones. And there were a few.
Even more worryingly, some messages from friends were akin to bullying, but she dismissed those as innocent banter. And I am a parent working in digital: I’ve designed and deployed social media campaigns and worked with all the major open platforms; I understand how they work.
In 2017, Molly Russell, 14 years old, took her own life after suffering a period of depression. The official report, available online, states that “The way that the platforms operated meant that Molly had access to images, video clips and text concerning or concerned with self-harm, suicide or that were otherwise negative or depressing in nature. The platform operated in such a way using algorithms as to result, in some circumstances, of binge periods of images, video clips and text some of which were selected and provided without Molly requesting them.”
Molly’s passing, although related more to harmful content than to bullying, is still an extremely sad reminder that our children aren’t safe, that we struggle to keep them safe, that governments can’t agree on effective regulations, and that platforms aren’t doing enough or taking responsibility.
Andrew Walker, the coroner for the Northern District of Greater London, concluded that Molly died from an act of self-harm while suffering from depression, and strongly highlighted the negative effects of online content.
He said the images of self-harm and suicide she viewed “shouldn’t have been available for a child to see”. Social media content contributed “more than minimally” to Molly’s death.
However sad it is that a young girl had to lose her life for things to move in a more reassuring direction, things do seem to be moving…
For example, the National Society for the Prevention of Cruelty to Children (NSPCC) in the U.K. is calling for tech platforms to take responsibility for protecting young people. Prince William says online safety for young people should be “a prerequisite, not an afterthought”.
In a first for both Meta, which owns Instagram, and Pinterest, senior executives had to give evidence under oath in a U.K. court. (Source: BBC)
And if governments do struggle with the “safeguarding versus freedom of speech” argument, regulations are about to get, if not clearer, then stricter. Like the 2016 EU General Data Protection Regulation (GDPR), which came about after the suicide of Olive Cooke, a 92-year-old pensioner overwhelmed by the roughly 3,000 donation requests she received in a year, the new regulations may take a while to be implemented, but they will come with the serious risk of fines as high as 8 to 10% of a business’s turnover.
All this to say that we can only hope for change.
The question is: who should, and who will, take preventive action? Who is responsible for helping to tackle cyberbullying?
Children?
Children and teens are very resourceful. They know about bullying and try to help each other. As digital natives, they see very little difference between the real and the virtual world, adopting platforms and apps and navigating between them seamlessly. They know there is the potential for harm, but they can’t be expected to spot signs of abuse, and are even less likely to be in control of online interactions.
Data around digital safety and cyberbullying is sparse or at best inconsistent. But we all know how damaging it can be. Not every child can talk to their parents; not every parent is digitally savvy enough to understand.
They are children and young people; yes, they need educating, but they can’t be held responsible if the world they inherit is unsafe.
Parents and educators?
We all know too well how difficult it is to keep up with the digital world. We as adults give away our data and accept that platforms will serve us content that we haven’t asked for. But most importantly, we know that reasoning with a teenager is no easy task.
Simply put, parents, regulators and educators struggle to keep up.
While platforms claim to deploy ways of keeping users safe, the onus seems to remain on users to understand how to protect themselves. Teachers are expected to educate young people in digital citizenship; parents are expected to do the same while also monitoring their child’s activity.
Of course, coining terms such as “digital citizenship”, defined by FutureLearn as “the ability to safely and responsibly access digital technologies, as well as being an active and respectful member of society, both online and offline”, is helpful.
The term and the concept are becoming more widely known and shared.
But for educators, it comes on top of existing curricula and means yet another topic to understand, master and then teach. So there is still a long way to go.
The platforms?
If you search for what platforms are doing to prevent bullying or harmful content, they happily present a mix of human and technological measures. Yet examples of distressing content flourish, with extremely serious, even fatal, consequences for users’ mental health and lives.
When Adam Mosseri took over as Head of Instagram, he announced his intention to get serious about safety on the platform. “We are in a pivotal moment,” he said in 2019. “We want to lead the industry in this fight.” (Source: Time) New features were tested, but we have yet to see any real progress.
Meta’s own research showed that Instagram made body-image issues worse for teenage girls. Yet the company will not accept the overall impact of its product on young people’s mental health. (Source: The Guardian)
Systematically, platforms revert to putting the responsibility on users to check privacy and safety features and protect themselves.
Platforms’ financial interests play a big role in how hard they push the safety agenda, encouraging them to make compromises and focus on financial gain. They are businesses, after all, with targets to meet.
But this doesn’t excuse failing to put everything in place to protect the communities they help create and facilitate. Gone are the days when one could expect communities to self-regulate. And young people, despite their amazing peer-to-peer support ethos, should be able to expect to be kept safe from harm, and to enjoy and benefit from what these communities can offer.
So if we are managing an online community, the argument that good moderation practices help keep our brands safe from reputational harm, or from the risk of fines (or worse) for failing to meet regulatory requirements, should already be motivation enough to review our moderation tools and processes. But by facilitating engagement and the creation and sharing of content, we are ultimately responsible for our users’ and moderators’ safety, and this should be a given.
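What might reviewing those tools and processes look like in practice? Here is a minimal, purely illustrative sketch; every name, pattern and rule in it is a hypothetical example, not any platform’s actual system. The design choice it demonstrates is that flagged content is held back for a human moderator by default, rather than published first and taken down later:

```python
# A minimal, illustrative sketch of a "human + tech" moderation pipeline.
# All names, patterns and rules here are hypothetical examples,
# not any real platform's rules or API.
import re
from dataclasses import dataclass, field

# Crude keyword patterns stand in for real detection systems, which
# combine trained classifiers, image matching and behavioural signals.
HARMFUL_PATTERNS = [
    re.compile(r"\bkill yourself\b", re.IGNORECASE),
    re.compile(r"\bnobody likes you\b", re.IGNORECASE),
]

@dataclass
class Post:
    author: str
    text: str
    flagged: bool = False
    reasons: list[str] = field(default_factory=list)

def pre_moderate(post: Post) -> Post:
    """Flag posts matching any harmful pattern for human review."""
    for pattern in HARMFUL_PATTERNS:
        if pattern.search(post.text):
            post.flagged = True
            post.reasons.append(pattern.pattern)
    return post

def publish_or_queue(post: Post, review_queue: list[Post]) -> None:
    """Hold flagged posts back by default; a human makes the final call."""
    if post.flagged:
        review_queue.append(post)
    else:
        print(f"Published: {post.text!r}")

if __name__ == "__main__":
    queue: list[Post] = []
    for text in ["See you at practice tomorrow!",
                 "nobody likes you, just leave"]:
        publish_or_queue(pre_moderate(Post("user123", text)), queue)
    print(f"{len(queue)} post(s) held for human review")
```

The specific rules matter far less than the default they enforce: content that trips a flag waits for a human rather than reaching a child first.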
In short, the internet will only become a safer place if we all take responsibility and act on harmful content and behaviours within our remit. Governments need to regulate, parents need to educate, and platforms need to implement the safest possible solutions. Apologising isn’t enough.