
AI Ethics Expert’s Corner: Kyle Dent, Head of AI Ethics


This month we’ve added a new “Expert’s Corner” feature, starting with an interview with our own Kyle Dent, who recently joined Checkstep. He answers questions about AI ethics and some of the challenges of content moderation.

AI Ethics FAQ with Kyle Dent


1. Given your extensive work in AI ethics, how would you address the topic of efficiency and AI, particularly when we see articles claiming that AI content moderation is better than human moderators?


We need to be skeptical of claims that AI performs better than humans. It’s been a common boast, especially since the newer bidirectional transformer models have come out, but the headlines leave out a lot of the caveats.

Content moderation, in particular, is very context dependent, and I don’t think anyone would seriously argue that machines are better than humans at understanding the nuances of language. Having said that, AI is a powerful tool that is absolutely required for moderating content at any kind of scale. The trick is combining the individual strengths of human and machine intelligence in a way that maximizes the efficiency of the overall process.
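To make that division of labor concrete, here is a minimal Python sketch of confidence-based triage, one common pattern for splitting work between a model and human reviewers. The thresholds, names, and structure are illustrative assumptions on our part, not a description of Checkstep’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "allow", or "human_review"
    score: float  # model's estimated probability that the content violates policy

def triage(score: float,
           remove_threshold: float = 0.95,
           allow_threshold: float = 0.05) -> Decision:
    """Route one piece of content based on model confidence.

    High-confidence violations are actioned automatically, clear
    non-violations are allowed, and everything in between (the
    nuanced, context-dependent cases) is queued for a human.
    """
    if score >= remove_threshold:
        return Decision("remove", score)
    if score <= allow_threshold:
        return Decision("allow", score)
    return Decision("human_review", score)

# Example: clear-cut scores are handled automatically;
# the ambiguous middle goes to a person.
print(triage(0.97).action)  # remove
print(triage(0.40).action)  # human_review
print(triage(0.01).action)  # allow
```

In a setup like this, the model absorbs the high-volume, clear-cut cases at scale, while the ambiguous, context-dependent ones, exactly the cases Kyle describes, land with human moderators.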


2. What is the most shocking news you’ve come across with respect to hate speech/misinformation/spam? How would you have addressed it?


Actually, I think hate speech and disinformation are themselves shocking, but now that we’ve moved most of our public discourse online, we’ve seen just how prevalent intolerance and hatred are. I’d have to say that the Pizzagate incident really woke me up to the extent of online disinformation and to its potential for real-world harm. And, of course, it’s really obvious how much racial minorities and other marginalized groups like LGBTQ people suffer from hate speech.

The solution requires lots of us to be involved, and it’s going to take time, but we need to build up the structures and systems that allow quality information to dominate. There will still be voices that peddle misinformation and hate, but as we make progress, hopefully those will retreat to the fringes and become less effective weapons.


3. How has the dissemination of misinformation changed over time?


Yeah, that’s the thing: this is not the first time we as a society have had to deal with a very ugly information space. During the mid- to late-1800s in the United States, we saw the rise of yellow journalism, characterized by hysterical headlines, fabricated stories, and plenty of mudslinging. The penny papers of the day were profitable only because they could sell advertising and reach lots of eyeballs.

All of which sounds a lot like today’s big social media companies. Add recommendation algorithms into today’s mix, and the problem becomes that much worse. We got out of that cycle only because people lost their taste for extreme sensationalism, and journalists began to see themselves as stewards of objective, accurate information with an important role in democracy. It’s still not clear how we can make a similar transition today, but lots of us are working on it.

4. Where do you stand with respect to the repeal of Section 230?


As a matter of fact, I just read an article in Wired that has me rethinking Section 230. I still believe it wasn’t crazy at the time to treat online platforms as simple conduits for speech, but Gilad Edelman makes a very compelling argument that liability protection never had to be all or nothing. The U.S. courts are actually set up to make case-by-case decisions that, over time, form policy through the resulting body of common law, which would have given us a much more nuanced treatment of platforms’ legal liability.

Edelman also argues, and I agree, that it would be a mistake to completely repeal Section 230 at this point. We can’t go back to 1996, when the case law would have developed in parallel with our evolving use of social media. Section 230 definitely needs adjusting, because as things stand, it’s too much of a shield for platforms that benefit from purposely damaging content like sexual privacy invasion and defamation. The key to any changes, though, is that they not overly burden small companies or hand even more advantage to the big tech platforms that have the resources to navigate a new legal landscape.

5. Why the move from a tenured corporate career to a small startup?


You sound like my mother. (Just kidding, she’s actually very happy for me.) Mainly, I’m really excited to be focused on AI ethics, especially the problems of disinformation and toxic content online. I think we’re doing great things at Checkstep, and I’m very happy to be contributing in some way to developing the quality information space the world so badly needs.

If you would like to catch up on other thought leadership pieces by Kyle, click here.

An edited version of this story originally appeared in The Checkstep Round-up: https://checkstep.substack.com/p/anti-hate-action-legislative-activity
