AI Ethics Expert’s Corner: Kyle Dent, Head of AI Ethics


This month we’ve added a new “Expert’s Corner” feature starting with an interview with our own Kyle Dent, who recently joined Checkstep. He answers questions about AI ethics and some of the challenges of content moderation.

AI Ethics FAQ with Kyle Dent


1. With your extensive work around AI ethics, how would you address the topic of efficiency and AI, particularly when we see articles claiming that AI content moderation is better than human moderators?


We need to be skeptical of claims that AI performs better than humans. It’s been a common boast, especially since the newer bidirectional transformer models have come out, but the headlines leave out a lot of the caveats.

Content moderation, in particular, is very context-dependent, and I don’t think anyone would seriously argue that machines are better than humans at understanding the nuances of language. Having said that, AI is a powerful tool that is absolutely required for moderating content at any kind of scale. The trick is combining the individual strengths of human and machine intelligence in a way that maximizes the efficiency of the overall process.
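
To make that division of labor concrete, here is a minimal sketch of confidence-based routing, where the model auto-resolves only the cases it scores as clear-cut and escalates everything in between to human reviewers. The `route` function, the thresholds, and the `classifier` interface are illustrative assumptions for this sketch, not a description of Checkstep’s actual pipeline.

```python
# Hypothetical human-in-the-loop moderation routing (illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # "remove", "allow", or "human_review"
    score: float  # estimated probability that the content violates policy

def route(
    content: str,
    classifier: Callable[[str], float],  # assumed to return a score in [0, 1]
    auto_remove: float = 0.95,
    auto_allow: float = 0.05,
) -> Decision:
    """Machines handle clear-cut cases at scale; humans handle nuance."""
    score = classifier(content)
    if score >= auto_remove:
        return Decision("remove", score)    # obvious violation: act automatically
    if score <= auto_allow:
        return Decision("allow", score)     # obviously benign: let it through
    return Decision("human_review", score)  # ambiguous: escalate to a person
```

In a setup like this, the human decisions on escalated cases can be fed back as labeled training data, so the model and the thresholds improve together over time.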


2. What is the most shocking news you’ve come across with respect to hate speech/misinformation/spam? How would you have addressed it?


Actually, I think hate speech and disinformation are themselves shocking, but now that we’ve moved most of our public discourse online, we’ve seen just how prevalent intolerance and hatred are. I’d have to say that the Pizzagate incident really woke me up to the extent of online disinformation and to its potential for real-world harm. And, of course, it’s really obvious how much racial minorities and other marginalized groups, like LGBTQ populations, suffer from hate speech.

The solution requires lots of us to be involved, and it’s going to take time, but we need to build up the structures and systems that allow quality information to dominate. There will still be voices that peddle misinformation and hate, but as we make progress, hopefully those will retreat to the fringes and become less effective weapons.


3. How has the dissemination of misinformation changed over time?


Yeah, that’s the thing: this is not the first time we as a society have had to deal with a very ugly information space. During the mid-to-late 1800s in the United States, there was the rise of yellow journalism, characterized by hysterical headlines, fabricated stories, and plenty of mudslinging. The penny papers of that day were profitable only because they could reach lots of eyeballs and sell that attention to advertisers.

All of which sounds a lot like today’s big social media companies. Add recommendation algorithms into today’s mix, and the problem becomes that much worse. We got out of that cycle only because people lost their taste for the extreme sensationalism, and journalists began to see themselves as stewards of objective and accurate information with an important role in democracy. It’s still not clear how we can make a similar transition today, but lots of us are working on it.

4. Where do you stand with respect to the repeal of Section 230?


As a matter of fact, I just read an article in Wired magazine that has me rethinking Section 230. I still believe it wasn’t crazy at the time to treat online platforms as simple conduits for speech, but Gilad Edelman makes a very compelling argument that liability protection never had to be all or nothing. U.S. courts are actually set up to make case-by-case decisions that, over time, form policy through the resulting body of common law, which would have given us a much more nuanced treatment of platforms’ legal liability.

Edelman also says, and I agree with him, that it would be a mistake to completely repeal Section 230 at this point. We can’t go back to 1996, when the case law would have developed in parallel with our evolving use of social media. Section 230 definitely needs adjusting, because as things stand, it’s too much of a shield for platforms that benefit from purposely damaging content like sexual privacy invasion and defamation. The key to any changes, though, is that they don’t overly burden small companies or give even more advantage to the big tech platforms, which have the resources to navigate a new legal landscape.

5. Why the move from a tenured corporate career to a small startup?


You sound like my mother. (Just kidding, she’s actually very happy for me.) Mainly, I’m really excited to be focused on AI ethics, especially the problems of disinformation and toxic content online. I think we’re doing great things at Checkstep, and I’m very happy to be contributing in some way to developing the quality information space the world needs so badly.

If you would like to catch up on other thought leadership pieces by Kyle, click here.

An edited version of this story originally appeared in The Checkstep Round-up: https://checkstep.substack.com/p/anti-hate-action-legislative-activity
