This month we’ve added a new “Expert’s Corner” feature, starting with an interview with our own Kyle Dent, who recently joined Checkstep. He answers questions about AI ethics and some of the challenges of content moderation.
AI Ethics FAQ with Kyle Dent
1. Given your extensive work on AI ethics, how would you address the topic of efficiency and AI, particularly when we see articles claiming that AI content moderation is better than human moderators?
We need to be skeptical of claims that AI performs better than humans. It’s been a common boast, especially since the newer bidirectional transformer models have come out, but the headlines leave out a lot of the caveats.
Content moderation, in particular, is very context-dependent, and I don’t think anyone would seriously argue that machines are better than humans at understanding the nuances of language. Having said that, AI is a powerful tool that is absolutely required for moderating content at any kind of scale. The trick is combining the individual strengths of human and machine intelligence in a way that maximizes the efficiency of the overall process.
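To make that division of labor concrete, here is a minimal sketch of a human-in-the-loop triage step. Everything in it is hypothetical illustration, not Checkstep’s actual pipeline: the `triage` function, the thresholds, and the `violation_probability` interface are all assumptions. The idea is that the model absorbs the high-confidence volume while the uncertain middle goes to human reviewers.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real system would tune these per policy area.
AUTO_REMOVE = 0.95  # model is confident enough to act without a human
AUTO_ALLOW = 0.05   # model is confident the content is benign

@dataclass
class Decision:
    action: str   # "remove", "allow", or "human_review"
    score: float  # model's estimated probability of a policy violation

def triage(text: str, model) -> Decision:
    """Automate the clear-cut cases; escalate the nuanced middle to humans."""
    score = model.violation_probability(text)
    if score >= AUTO_REMOVE:
        return Decision("remove", score)
    if score <= AUTO_ALLOW:
        return Decision("allow", score)
    return Decision("human_review", score)

# Stand-in for a real classifier (e.g., a fine-tuned transformer model).
class StubModel:
    def violation_probability(self, text: str) -> float:
        return 0.99 if "obvious slur" in text.lower() else 0.5

if __name__ == "__main__":
    print(triage("an obvious slur here", StubModel()))   # auto-removed
    print(triage("a sarcastic in-joke", StubModel()))    # routed to a human
```

Moving the two thresholds trades automation volume against how many nuanced cases human moderators see, which is exactly the efficiency balance described above.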
2. What is the most shocking news you’ve come across with respect to hate speech/misinformation/spam? How would you have addressed it?
Actually, I think hate speech and disinformation are themselves shocking, but now that we’ve moved most of our public discourse online, we’ve seen just how prevalent intolerance and hatred are. I’d have to say that the Pizzagate incident really woke me up to the extent of online disinformation and to its potential for real-world harm. And, of course, it’s really obvious how much racial minorities and other marginalized groups, like LGBTQ populations, suffer from hate speech.
The solution requires lots of us to be involved, and it’s going to take time, but we need to build up the structures and systems that allow quality information to dominate. There will still be voices that peddle misinformation and hate, but as we make progress, hopefully those will retreat to the fringes and become less effective weapons.
3. How has the dissemination of misinformation changed over time?
Yeah, that’s the thing: this is not the first time we as a society have had to deal with a very ugly information space. During the mid-to-late 1800s in the United States, there was the rise of yellow journalism, characterized by hysterical headlines, fabricated stories, and plenty of mudslinging. The penny papers of that day were profitable only because they reached lots of eyeballs and could sell advertising against that audience.
All of which sounds a lot like today’s big social media companies. Add recommendation algorithms into today’s mix, and the problem has become that much worse. We got out of that cycle only because people lost their taste for extreme sensationalism, and journalists began to see themselves as stewards of objective and accurate information with an important role in democracy. It’s still not clear how we can make a similar transition today, but lots of us are working on it.
As a matter of fact, I just read an article in Wired Magazine that has me rethinking Section 230. I still believe it wasn’t crazy at the time to treat online platforms as simple conduits for speech, but Gilad Edelman makes a very compelling argument that liability protection never had to be all or nothing. The U.S. courts are actually set up to make case-by-case decisions that form policy over time through the resulting body of common law, which would have given us a much more nuanced treatment of platforms’ legal liability.
Edelman also says, and I agree with this, that it would be a mistake to completely repeal Section 230 at this point. We can’t go back to 1996, when the case law would have developed in parallel with our evolving use of social media. Section 230 definitely needs adjusting, because as things stand it’s too much of a shield for platforms that benefit from purposely damaging content like invasions of sexual privacy and defamation. The key to any changes, though, is that they not overly burden small companies or hand even more advantage to the big tech platforms that have the resources to navigate a new legal landscape.
4. Are you sure joining a startup like Checkstep was the right move for you?
You sound like my mother. (Just kidding, she’s actually very happy for me.) I’m mainly really excited to be focused on AI ethics, especially the problem of disinformation and dealing with toxic content online. I think we’re doing great things at Checkstep, and I’m very happy to be contributing in some way to developing the quality information space the world needs so badly.
If you would like to catch up on other thought leadership pieces by Kyle, click here.
An edited version of this story originally appeared in The Checkstep Round-up: https://checkstep.substack.com/p/anti-hate-action-legislative-activity