
Expert’s Corner with Head of Research Isabelle Augenstein

This month we were very happy to sit down with one of the brains behind Checkstep, who is also a recognized talent among European academics. She is co-head of research at Checkstep and an associate professor at the University of Copenhagen.

She currently holds a prestigious DFF Sapere Aude Research Leader fellowship on ‘Learning to Explain Attitudes on Social Media’ and was recently admitted to the Young Academy of the Royal Danish Academy of Sciences and Letters.

In this interview she explains how she got involved in the Trust and Safety space and the important role researchers play in finding solutions to all manner of online harms.

1. What made you get involved in the online Trust and Safety space?

First and foremost, online harms present a substantial societal problem — platforms are rife with abusive language, sexism, misinformation, etc. Short of bringing about a cultural change, what one can realistically do to improve the status quo is to semi-automatically moderate harmful content. This, in itself, is very challenging from a technical point of view, which, as a researcher, I find exciting.

2. People tend to trust content that resonates with their beliefs or helps address their doubts, which makes it easy for misinformation propagators to target potential “victims”. Given your extensive work on fact-checking, do you think debunking such claims is the best way to keep society better informed?

Yes. Prior work in psychology shows that it is very difficult to change people’s core beliefs and values. It is therefore important to detect disinformation as early as possible, before it spreads, and to provide automatic fact checks to content moderators for this purpose.
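To make the idea of an automatic fact check concrete, here is a minimal sketch of one common approach: verifying a claim against a piece of retrieved evidence with a pretrained natural language inference (NLI) model. This is an illustration under our own assumptions (the roberta-large-mnli checkpoint and the example texts are ours), not a description of any system Isabelle has built.

```python
# Minimal claim-verification sketch: score a claim against evidence with
# a pretrained natural language inference (NLI) model. ENTAILMENT roughly
# means "supported", CONTRADICTION "refuted", NEUTRAL "not enough info".
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def verify(claim: str, evidence: str) -> dict:
    # The model reads the evidence as the premise and the claim as the
    # hypothesis; the pipeline returns the top label and its score.
    return nli({"text": evidence, "text_pair": claim})[0]

claim = "Ingesting bleach cures viral infections."
evidence = ("Health authorities warn that ingesting bleach is dangerous "
            "and has no antiviral benefit.")
print(verify(claim, evidence))
# e.g. {'label': 'CONTRADICTION', 'score': 0.9...} -> surface to a moderator
```

A real pipeline would first retrieve candidate evidence (for example, from an index of previously fact-checked claims) and aggregate verdicts over several evidence passages; the NLI step above is only the final verification stage.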

3. Better understanding the context of certain conversations helps us put things into perspective, especially when it comes to social media. Could you tell us a little about your research project EXPANSE, which aims to explain attitudes on social media?

EXPANSE is a research leader fellowship I recently obtained (more information here: https://dff.dk/en/grants/copy_of_research-leaders-2020/researchleader-14). The project itself started on October 1, 2021, so there are unfortunately not many results to share yet. Very briefly, though, the overarching aim is to detect attitudes automatically (a task also called stance detection), but in a much more fine-grained and transparent way than is possible today. The key innovation is to imbue stance detection models with sociological knowledge, which I hypothesize can shed light on why people hold certain attitudes and thus lead to more insightful automatically generated explanations.
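For readers curious what stance detection looks like in code, below is a minimal zero-shot baseline: given a post and a target, a general-purpose NLI model scores whether the author is in favor of, against, or neutral toward the target. This is a simple sketch under our own assumptions (the bart-large-mnli checkpoint and the label phrasing are ours); it is emphatically not the sociologically informed EXPANSE approach described above.

```python
# Stance detection as zero-shot classification: given a post and a
# target, score whether the author is in favor, against, or neutral.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "Wind farms ruin the landscape and barely dent emissions."
target = "renewable energy"

result = classifier(
    post,
    candidate_labels=["in favor of", "against", "neutral towards"],
    # The template turns each label into a hypothesis the NLI model tests,
    # e.g. "The author is against renewable energy."
    hypothesis_template=f"The author is {{}} {target}.",
)
print(result["labels"][0], round(result["scores"][0], 2))
# labels are sorted by score, highest first -> e.g. "against 0.8"
```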

4. Content moderation is a growing concern, given the recent infodemic. However, some criticize it as a means to suppress freedom of speech. How should content moderation companies position themselves? What are some of the areas they should focus on, to ensure online safety?

The concept of freedom of speech has existed since the 6th century BC, long before social media or even print media were invented. Before social media, it was much more challenging to spread and weaponize information; now, everyone with internet access can do so, anonymously and with few repercussions. This format, by design, brings out the worst in people: things they would never feel comfortable saying to someone’s face, they feel comfortable writing in an anonymous online forum. The filter bubble effect means people additionally receive backing for their opinions from like-minded individuals. In this day and age, one therefore needs to weigh freedom of speech very carefully against the real harms it can cause.

One area I find particularly concerning is the negative impact of social media, especially image-sharing platforms, on depression and suicide among teenagers, driven by the distorted views of reality many users present on these platforms, including views of body image and lifestyle. I think a careful audit of such platforms is needed to address this problem holistically, but content moderation can at least help identify particularly harmful information, such as posts glorifying anorexia.

5. How can AI be applied to help with the problem? How does AI explainability help?

AI-based content moderation solutions can help identify harmful content before it even reaches users. Explainable AI can be useful in two ways: first, it can help continually improve content moderation models by identifying why they sometimes make mistakes; second, especially for knowledge-intensive tasks such as automatic fact checking, it can give content moderators more information about why a model arrived at a prediction, making it easier to verify manually whether the prediction is reliable.
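As a concrete illustration of that second use, here is a simple occlusion-style explanation: delete one word at a time from a post and measure how much the model’s toxicity score drops, so a moderator can see which words drove the prediction. The public checkpoint below is our assumption for illustration, not Checkstep’s production model, and occlusion is just one of many explanation methods.

```python
# Occlusion-based explanation for a toxicity classifier: remove one word
# at a time and measure how much the "toxic" score drops. Words with the
# largest drops are the ones the model relied on most.
from transformers import pipeline

clf = pipeline("text-classification", model="unitary/toxic-bert")

def toxic_score(text: str) -> float:
    # top_k=None returns a score per label; keep the "toxic" one.
    scores = clf(text, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "toxic")

def explain(text: str):
    words = text.split()
    base = toxic_score(text)
    drops = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        drops.append((words[i], base - toxic_score(reduced)))
    # Sort words by how much removing them lowers the toxicity score.
    return sorted(drops, key=lambda d: d[1], reverse=True)

print(explain("you people are worthless idiots"))
```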

6. The news is full of stories about how bad actors propagate misinformation and how platforms seem to exacerbate the problem. What role do you see for academics in addressing these problems?

Academics can provide crucial insights into why this phenomenon occurs, as well as potential solutions to the problem. For misinformation specifically, academics from many different disciplines have important and complementary research findings, which should be taken into account — e.g. from psychology, about the perception of misinformation; from computer science, about how to develop automatic content moderation solutions; from law, about how legislation applies to online platforms in different countries.
Academics can thus help inform online platform developers on how to make platforms safer for everyone, content moderation companies on how to automatically detect harmful content, and, perhaps most crucially, decision makers in governments on how to develop new legislation related to online harms.

P.S. Something to look out for:

Isabelle defends her higher doctoral dissertation, “Towards Explainable Fact Checking”, for the title of Doctor Scientiarum on 6 December: https://di.ku.dk/begivenhedsmappe/begivenheder-2021/doctorate-defence-by-isabelle-augenstein/
