
The Effects of Unregulated Content for Gen Z

The Internet as an Irreplaceable Tool

Gen Z is the first generation born into a world where the internet plays an irreplaceable role, and in many ways these children and adolescents are not just consumers but inhabitants of digital society. Outside of school, Gen Z spends more time on phones, computers, and game consoles than on sports or any other activity that doesn’t require a technological device. Their fluency with technology surpasses that of prior generations, and this is not a “bad thing” in a world that is always evolving and producing new technologies. However, this unprecedented access to the internet exposes them to an almost infinite realm of experiences, both positive and negative.

Social Media

From primary school to university, many kids, adolescents, and young adults use social media daily. Apps like Instagram, Snapchat, and TikTok have emerged and resonated deeply with Gen Z, so much so that they have become the cornerstone of modern connectivity, enabling constant interaction and cultivating relationships among young users. They offer a space to chat with friends, share bits of their lives, discover new hobbies and interests, stay informed about their favourite bands and artists, and so much more.

Video Games

Video games have also become a major pillar of Gen Z’s lives, especially among the male population. These forms of entertainment have evolved from humble beginnings, with simple arcade titles such as Pong, Pac-Man, and Mario Bros., into more modern and complex environments. Games like Call of Duty, Fortnite, and Minecraft have transcended mere entertainment, morphing into immersive experiences that stimulate skill development, strategic thinking, and social camaraderie. They offer a dynamic platform where friends and strangers can play together, interacting to improve their characters, complete campaigns, and ultimately level up.

Videoconferencing Classes

The introduction of videoconferencing for educational purposes has been revolutionary, particularly in unforeseen circumstances like the COVID-19 pandemic. It ensured uninterrupted learning when physical attendance at school wasn’t feasible, and it wouldn’t have been possible without the students’ expert knowledge of technology. That same expertise is what separated them from many of their teachers, who weren’t prepared to make videoconferencing one of the main spaces in which to spend their time. It is this disparity in skill that leads to some of the challenges covered next.

What’s negative about all of this?

While all of these environments can lead to positive experiences and growth, they can also lead to extremely dark outcomes, especially for younger demographics. Take social media, for example. It’s fun and lets them keep up with family, friends, and the wider world, but it can also leave them vulnerable to fake profiles run by bad actors, harassment, offensive comments on their pictures, explicit content, and more.

Video games can be a great tool for cultivating strategic thinking and camaraderie, but they can also morph into a hub for hate. Chatting with strangers can expose kids to offensive language, aggressive players, and worse.

And lastly, because of the disparity between teachers’ and students’ knowledge of technology, kids are vulnerable to cyberbullying through chat groups and online harassment from other kids, often in spaces their teachers don’t know how to monitor.

It’s clear by now that the unregulated nature of these digital spaces poses significant risks to young users.

What’s the solution?

If children are the concern, parents can employ many strategies to create a safe digital environment. Implementing app restrictions and managing screen time are effective measures. By utilising parental control apps or device settings, parents can limit access to certain apps or websites deemed inappropriate for their child’s age or maturity level. Setting time limits encourages a healthy balance between screen time and other activities, promoting overall well-being.
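To illustrate the mechanics, the logic behind a screen-time limit is straightforward: track usage per app and deny launches once the daily allowance is spent. The sketch below is a toy model with hypothetical app names and limits, not the API of any real parental control product.

```python
from datetime import timedelta

# Hypothetical per-app daily allowances a parent might configure.
DAILY_LIMITS = {
    "TikTok": timedelta(minutes=45),
    "Fortnite": timedelta(hours=1),
}

def may_open(app: str, used_today: timedelta) -> bool:
    """Allow the app only if today's usage is still under its limit.

    Apps without a configured limit are always allowed.
    """
    limit = DAILY_LIMITS.get(app)
    return limit is None or used_today < limit

print(may_open("TikTok", timedelta(minutes=30)))  # True: allowance remains
print(may_open("TikTok", timedelta(minutes=50)))  # False: limit spent
```

Real parental control tools wrap exactly this kind of rule in friendlier interfaces, which is why they work well for younger children.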

But what about online classes? And what about adolescents and young adults? This is where this type of protection falls short, and where promoting awareness of content moderation technologies among tech companies becomes necessary.

The Crucial Role of Content Moderation

What is content moderation, and how can it protect Generation Z? In one of our latest articles, we define content moderation as “the strategic process of evaluating, filtering, and regulating user-generated content on digital ecosystems.”

Implementing efficient moderation tools, especially AI-driven solutions, becomes critical in shielding young users from potential harm. AI-based moderation offers real-time identification and mitigation of harmful content, serving as a proactive shield against explicit imagery, offensive language, and cyberbullying.
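To make this concrete, here is a minimal sketch of what a real-time moderation check can look like. The keyword scorer is only a stand-in to keep the example self-contained; a production system would call a trained classifier at that point, and the thresholds, labels, and function names are illustrative assumptions, not Checkstep’s actual API.

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these per policy and audience.
BLOCK_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.5

@dataclass
class Decision:
    action: str   # "allow", "review", or "block"
    score: float

def score_toxicity(text: str) -> float:
    """Stand-in for an AI classifier: returns a harm score in [0, 1].

    A real deployment would run a trained model here; this keyword
    heuristic exists only to keep the sketch runnable on its own.
    """
    flagged_terms = {"idiot", "loser", "hate you"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 2)

def moderate(text: str) -> Decision:
    """Decide what happens to a message before other users can see it."""
    score = score_toxicity(text)
    if score >= BLOCK_THRESHOLD:
        return Decision("block", score)    # never reaches other users
    if score >= REVIEW_THRESHOLD:
        return Decision("review", score)   # queued for a human moderator
    return Decision("allow", score)

for msg in ["nice play!", "you idiot, I hate you"]:
    print(msg, "->", moderate(msg))
```

The key design point is that scoring happens before content is published, which is what makes AI moderation a proactive shield rather than a reactive clean-up.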

Empowering platforms with effective content moderation tools is crucial for creating safer online environments for children and adolescents. Increasing awareness and advocating for their integration can catalyse the development of more secure digital spaces, ensuring the well-being and growth of young users.

Conclusion

The internet has unequivocally embedded itself in Generation Z’s lives, presenting new opportunities and new challenges alike. Kids, adolescents, and young adults face both rewarding experiences and potential threats as they navigate social media, video games, and online schooling.

While social media helps people connect with each other, it also exposes them to cyberbullying and inappropriate content. Video games, meanwhile, are entertaining and can teach strategic thinking, but they also give some players a place to say hurtful things and behave aggressively. Lastly, the rapid adoption of videoconferencing in education has exposed the gap between students’ technological ability and teachers’ preparation, leaving young minds vulnerable to cyberbullying.

Ultimately, raising awareness of and advocating for comprehensive content moderation mechanisms are critical steps towards creating safer digital environments for current and future generations. AI-based moderation can recognise, report, and remove offensive comments, harassment, and bad actors before they ever reach users, making it an irreplaceable tool for navigating the internet.
