
Expert’s Corner with CEO of Recluse Laboratories Andrew Johnston

Our expert this month is former FBI agent Andrew Johnston. Andrew is the CEO and co-founder of Recluse Laboratories. Drawing on twelve years of experience in cybersecurity, Andrew has worked with clients both in an incident response capacity and in proactive services. In addition to his private sector work, Andrew served in the Federal Bureau of Investigation's Cyber and Counterterrorism divisions, where he performed field work and provided technical expertise to criminal and national security investigations.

1. What was your motivation behind starting Recluse Labs?

We started Recluse Labs to solve problems endemic to the threat intelligence industry. Threat intelligence, whether conducted in private industry, academia, or government, relies on a set of highly skilled individuals. These individuals are tasked with gaining access to adversarial communities, building believable identities, and extracting actionable information to form intelligence. Such individuals are few and far between, and the effect is that threat intelligence is often incredibly limited.

We’re passionate about cybersecurity and data science, and we believed that the combination of the two could enable us to have a far greater reach than organizations significantly larger than us. Since then, we’ve been working with industry peers, law enforcement, and intelligence experts to develop an automated, AI-enabled platform for collecting and analyzing intelligence.

2. Are there specific patterns that online platforms should be mindful of when tracking terrorist groups?

One of the more interesting patterns is the mobility of many terrorist groups from one platform to another. In the past few years, there has been plenty of media coverage of up-and-coming social media platforms being swarmed with profiles promoting ISIL and other terrorist groups. Oftentimes, this creates significant challenges for a nascent platform, especially those that aim to have less restrictive moderation than some of the major players. It is worth noting that a strategy of aggressive banning doesn't appear to be effective; terrorists have come to treat profiles as disposable and regularly create new accounts.

Consequently, the best approach to tracking and disrupting terrorist use of a platform has to occur at the content level. Naturally, simple solutions such as banning a set of words don't scale well, especially for a platform that caters to a multilingual audience. Likewise, human-centric approaches simply can't scale to handle the volume and velocity of content that a healthy social media platform generates on a regular basis. Multilingual machine learning solutions are really the only answer to this problem that can both meet the scale and effectively identify novel terrorist content. We've dedicated a lot of research to developing terrorist content recognition systems that can meet the needs of social platforms, governments, and researchers.
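To make the contrast concrete, here is a minimal sketch of the difference between a keyword blocklist and a learned classifier that scores the content itself. Everything in it is an illustrative placeholder, not Recluse Laboratories' actual system: the blocklist terms, the toy training data, and the character n-gram model, which stands in for the multilingual models described above.

```python
# Minimal sketch: keyword blocklist vs. a learned, language-agnostic classifier.
# All terms, texts, and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Approach 1: a keyword blocklist -- brittle, language-specific, easy to evade.
BLOCKLIST = {"banned slogan", "forbidden phrase"}

def keyword_flag(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

# Approach 2: a classifier over character n-grams, a cheap language-agnostic
# baseline standing in for the multilingual ML systems described above.
train_texts = [
    "placeholder propaganda text one",     # label 1: terrorist content
    "placeholder propaganda text two",     # label 1: terrorist content
    "great recipe, my family loved it",    # label 0: benign
    "anyone know a good bike trail here?", # label 0: benign
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

def content_score(text: str) -> float:
    """Probability that a post is terrorist content, per the toy model."""
    return model.predict_proba([text])[0][1]
```

In practice the feature extractor would be a multilingual transformer fine-tuned on expert-labeled data rather than character n-grams; the point of the sketch is only that the decision comes from the content itself instead of a fixed word list.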

3. Quite recently, a known terrorist group, the Taliban, took control of the government of Afghanistan. What should platforms' stance on this be?

This is a hard question to navigate, as the answer will likely vary greatly depending on the platform's philosophy. There is merit to the argument that the Taliban are a significant geopolitical force, and that banning their content as a matter of policy hinders people from seeing the whole story. Moreover, hosting such content gives other users an opportunity to criticize, fact-check, and analyze it, which could enable users who would otherwise be influenced by the propaganda to see the counterargument on the same screen.

Conversely, hosting such content means maintaining very clear guidelines on when that content crosses the line into being unacceptable. Left unchecked, these groups are known to publish disgusting or violent content that could alienate users and pose legal risks. Platforms then find themselves in the position of having to define what constitutes "terrorism" and merits removal. An improper definition could have side effects that impact benign marginalized groups.

In contrast, simply banning content promoting terror groups such as the Taliban keeps the rules clearer, but doesn't fully solve the problem. There are innumerable terror groups, and what precisely constitutes terrorism (and who is guilty of committing it) can be highly culturally specific.

Given that Recluse’s mission heavily involves building and deploying artificial intelligence to combat terrorism, we had to consider this question early on. We settled on targeting groups where there is a global consensus that they are terror groups. This definition ensures that we are never beholden to the political zeitgeist of any given country. Although this higher burden of consensus could inhibit us from targeting some groups we may personally find abhorrent, it ensures that we can operate without having to derive a “terrorist calculus” to evaluate every group.

4. Child grooming and trafficking can be hard to track online. The New York Times did a piece on how Facebook often categorizes minors as adults when it is unable to determine their age. What are your thoughts on this?

Identifying someone’s age is an incredibly difficult task — we all know someone who looks half their age. Consequently, even the best categorization system is going to have some level of error, regardless of how that system is implemented. That said, in the case of child exploitation, false positives and false negatives have significantly different impacts. Identifying potential child exploitation is paramount, whereas the penalty of a false positive is primarily additional workload for the moderation team. Of course, there is a balance to be had — a system with excessive errors is nearly as useless as having no system at all.
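That asymmetry can be made explicit with a cost-weighted decision threshold. The sketch below is a standard expected-cost calculation; the cost values are assumptions chosen purely for illustration, not figures from the interview.

```python
# Minimal sketch: derive a flagging threshold from the relative costs of errors.
# Cost values are illustrative assumptions, not figures from the interview.
COST_FALSE_NEGATIVE = 50.0  # missing a minor at risk: severe harm
COST_FALSE_POSITIVE = 1.0   # spurious flag: extra moderator review

# Flag when the expected cost of staying silent exceeds the cost of flagging:
#   p * C_fn > (1 - p) * C_fp   =>   p > C_fp / (C_fp + C_fn)
THRESHOLD = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)

def should_flag(p_minor: float) -> bool:
    """Flag an account for human review, given the model's P(user is a minor)."""
    return p_minor > THRESHOLD

print(f"threshold = {THRESHOLD:.3f}")  # ~0.020: flag aggressively for review
```

The heavier the cost of a miss relative to a spurious flag, the lower the threshold drops, which matches the intuition that a system protecting children should err on the side of flagging.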

Thankfully, we don’t have to rely on our ability to identify minors as our sole tool for deplatforming child predators. Identifying and targeting language and behavior consistent with abusers can enable platforms to attack the problem at its source. In fusion with algorithms designed to identify minor users and child sexual abuse material, such techniques can better protect vulnerable groups even in the face of imprecise categorization.
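A minimal sketch of that fusion idea follows. The component signals, their values, and the noisy-OR combination are hypothetical stand-ins for the kinds of models described above, not any platform's actual system.

```python
# Minimal sketch: fuse independent risk signals into one score in [0, 1].
# Every signal name and value here is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    p_minor_contact: float    # model estimate: counterpart is a minor
    p_abuser_language: float  # model estimate: grooming-consistent language
    csam_hash_match: bool     # e.g. a perceptual-hash match on shared media

def risk_score(s: AccountSignals) -> float:
    # A known-media match is treated as near-certain. Otherwise combine the
    # probabilistic signals with a noisy-OR, which stays within [0, 1] and
    # rises whenever any single signal rises.
    if s.csam_hash_match:
        return 1.0
    return 1.0 - (1.0 - s.p_minor_contact) * (1.0 - s.p_abuser_language)

example = AccountSignals(p_minor_contact=0.4, p_abuser_language=0.7,
                         csam_hash_match=False)
print(f"risk = {risk_score(example):.2f}")  # 0.82 -> escalate for review
```

The design point mirrors the answer above: even when any single classifier is imprecise, combining behavioral, linguistic, and media signals lets a weak age estimate still contribute to a strong overall decision.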
