
Expert’s Corner with Head of Research Isabelle Augenstein

This month we were very happy to sit down with one of the brains behind Checkstep, who is also a recognized talent among European academics. She is the co-head of research at Checkstep and an associate professor at the University of Copenhagen.

She currently holds a prestigious DFF Sapere Aude Research Leader fellowship on ‘Learning to Explain Attitudes on Social Media’, and was also recently admitted to the Young Royal Danish Academy of Sciences and Letters.

In this interview she explains how she got involved in the Trust and Safety space and the important role researchers play in finding solutions to all manner of online harms.

1. What made you get involved in the online Trust and Safety space?

First and foremost, online harms present a substantial societal problem — platforms are rife with abusive language, sexism, misinformation etc. Short of bringing about a cultural change, what one can realistically do to improve the status quo is to semi-automatically moderate harmful content. This, in itself, is very challenging from a technical point of view, which, as a researcher, I find exciting.

2. People tend to trust content that resonates with their beliefs or generally helps address some of their doubts, making it easy for misinformation propagators to target potential “victims”. Given your extensive work around fact-checking, do you think debunking some of these claims is the best way to keep society better informed?

Yes, prior work in psychology shows that it is very difficult to change people’s core beliefs and values. Thus, it is important to detect disinformation as early as possible before it spreads, and to provide automatic fact checks to content moderators for this purpose.
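
(Editor’s note: as a rough illustration of what an automatic fact check can look like under the hood, the sketch below scores a claim against a retrieved evidence sentence with an off-the-shelf natural language inference model. The model name and example texts are placeholders for illustration only; this is not Isabelle’s or Checkstep’s actual system.)

```python
# Sketch only: score a claim against retrieved evidence with a generic NLI model.
# "roberta-large-mnli" and the example texts are illustrative placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def fact_check(claim: str, evidence: str) -> dict:
    """Return contradiction/neutral/entailment probabilities for (evidence, claim)."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return {model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)}

evidence = "Health authorities report that the vaccines contain no microchips."
claim = "COVID-19 vaccines contain microchips."
print(fact_check(claim, evidence))  # a high CONTRADICTION score flags the claim as refuted
```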

3. Better understanding the context of certain conversations helps us to put things into perspective, especially when it comes to social media. Could you tell us a little bit about your research project EXPANSE, which aims to explain attitudes on social media?

EXPANSE is a research leader fellowship I recently obtained (more information here: https://dff.dk/en/grants/copy_of_research-leaders-2020/researchleader-14). The project itself started on October 1, 2021, so unfortunately there are not many results to share yet. Very briefly about the goals of the project, though: the overarching aim is to detect attitudes automatically (also called stance detection), but to do so in a much more fine-grained and transparent way than is possible today. The key innovation is to imbue stance detection models with sociological knowledge, as I hypothesize this can shed light on why people hold certain attitudes and thus lead to more insightful automatically generated explanations.
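
(Editor’s note: for readers new to the term, stance detection asks whether a piece of text is in favour of, against, or neutral towards a given target. The toy sketch below shows the basic task with a generic zero-shot classifier; it is purely illustrative and does not reflect the methods being developed in EXPANSE. The model name, labels, and example post are assumptions.)

```python
# Sketch only: basic stance detection with a generic zero-shot classifier.
# The model name, labels, and example post are illustrative placeholders.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def detect_stance(post: str, target: str) -> dict:
    """Estimate whether the post is supportive of, opposed to, or neutral towards the target."""
    result = classifier(
        post,
        candidate_labels=["supportive", "opposed", "neutral"],
        hypothesis_template=f"The author's stance towards the claim '{target}' is {{}}.",
    )
    return {label: round(score, 3) for label, score in zip(result["labels"], result["scores"])}

print(detect_stance("Wind farms ruin the landscape and should be torn down.",
                    "wind energy should be expanded"))
```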

4. Content moderation is a growing concern, given the recent infodemic. However, some criticize it as a means to suppress freedom of speech. How should content moderation companies position themselves? What are some of the areas they should focus on, to ensure online safety?

The concept of freedom of speech has existed since the 6th century BC, long before social media or even print media were invented. Before social media, it was much more challenging to spread and weaponize information, whereas now everyone with access to the internet can do so, anonymously and with few repercussions. This format, by design, brings out the worst in people: things people would never feel comfortable saying to someone’s face, they feel comfortable writing in an anonymous online forum. The filter bubble effect means people additionally receive backing for their opinions from like-minded individuals. This means, in this day and age, one needs to very carefully weigh freedom of speech against the real harms it can cause. One area I find particularly concerning is the negative impact of social media, especially image-sharing platforms, on depression and suicide among teenagers, due to the distorted views of reality presented by many users on these platforms, including those related to body image and lifestyle. I think a careful audit of such platforms is needed to address this problem more holistically, but content moderation can at least help to identify particularly harmful information, such as posts glorifying anorexia.

5. How can AI be applied to help with the problem? How does AI explainability help?

AI-based content moderation solutions can help identify harmful content before it even reaches users. Explainable AI can be useful in two ways: first, it can help continually improve content moderation models by identifying why they sometimes make mistakes; and second, especially for knowledge-intensive tasks such as automatic fact checking, it can provide content moderators with more information about why a model arrived at a prediction, making it easier to manually verify whether the prediction is reliable.
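
(Editor’s note: as a concrete, if simplified, example of the second point, the sketch below uses leave-one-word-out occlusion, one of many possible attribution techniques, to show a moderator which words most increased a classifier’s harm score. The model name is a placeholder, and the approach is a generic illustration rather than anything specific to Checkstep.)

```python
# Sketch only: highlight which words drive a harmful-content prediction,
# using simple leave-one-word-out occlusion. The model name is a placeholder.
from transformers import pipeline

clf = pipeline("text-classification", model="unitary/toxic-bert")

def harm_score(text: str) -> float:
    """Score of the classifier's top label for this text."""
    return clf(text)[0]["score"]

def explain_by_occlusion(text: str) -> list[tuple[str, float]]:
    """Rank words by how much the score drops when each one is removed."""
    words = text.split()
    base = harm_score(text)
    drops = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        drops.append((words[i], base - harm_score(reduced)))
    return sorted(drops, key=lambda d: d[1], reverse=True)

# Words at the top of the ranking are the ones the model relied on most,
# giving a moderator a starting point for manual review of the flag.
```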

6. The news is full of how bad actors propagate misinformation and also how platforms seem to exacerbate the problem. What role do you see for academics in addressing these problems?

Academics can provide crucial insights into why this phenomenon occurs, as well as potential solutions to the problem. For misinformation specifically, academics from many different disciplines have important and complementary research findings, which should be taken into account — e.g. from psychology, about the perception of misinformation; from computer science, about how to develop automatic content moderation solutions; from law, about how legislation applies to online platforms in different countries.
Academics can thus help inform online platform developers on how to make platforms safer for everyone, content moderation companies on how to automatically detect harmful content, and, perhaps most crucially, decision makers in governments on how to develop new legislation related to online harms.

P.S. Something to look out for:

Isabelle will defend her higher doctoral dissertation, “Towards Explainable Fact Checking”, for the title of Doctor Scientiarum on 6 December: https://di.ku.dk/begivenhedsmappe/begivenheder-2021/doctorate-defence-by-isabelle-augenstein/
