
Expert’s Corner with CEO of Recluse Laboratories Andrew Johnston

Our expert this month is former FBI agent Andrew Johnston. Andrew is the CEO and co-founder of Recluse Laboratories. Drawing on twelve years of experience in cybersecurity, Andrew has worked with clients both in an incident response capacity and in proactive services. In addition to his private sector work, Andrew served in the Federal Bureau of Investigation's Cyber and Counterterrorism divisions, where he performed field work and provided technical expertise to criminal and national security investigations.

1. What was your motivation behind starting Recluse Labs?

We started Recluse Labs to solve problems endemic to the threat intelligence industry. Threat intelligence, whether conducted in private industry, academia, or government, relies on a set of highly skilled individuals. These individuals are tasked with gaining access to adversarial communities, building believable identities, and extracting actionable information to form intelligence. Such individuals are few and far between, and the effect is that threat intelligence is often incredibly limited.

We’re passionate about cybersecurity and data science, and we believed that the combination of the two could enable us to have a far greater reach than organizations significantly larger than us. Since then, we’ve been working with industry peers, law enforcement, and intelligence experts to develop an automated, AI-enabled platform for collecting and analyzing intelligence.

2. Are there specific patterns that online platforms should be mindful of while tracking terrorist groups?

One of the more interesting patterns is the mobility of many terrorist groups from one platform to another. In the past few years, there has been plenty of media coverage of up-and-coming social media platforms being swarmed with profiles promoting ISIL and other terrorist groups. Oftentimes, this creates significant challenges for a nascent platform, especially those that aim to have less restrictive moderation than some of the major players. It is worth noting that a strategy of aggressive banning doesn’t appear to be effective; terrorists have come to treat profiles as disposable and regularly create new accounts.

Consequently, the best approach to tracking and disrupting terrorist use of a platform has to occur at the content level. Naturally, simple solutions such as banning a set of words don’t scale well, especially for a platform that caters to a multilingual audience. Likewise, human-centric approaches simply can’t scale to handle the volume and velocity of content that a healthy social media platform generates on a regular basis. Multilingual machine learning solutions are really the only answer to this problem that can both meet the scale and effectively identify novel terrorist content. We’ve dedicated a lot of research to developing terrorist content recognition systems that can meet the needs of social platforms, governments, and researchers.
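To make the contrast between keyword lists and multilingual models concrete, here is a minimal sketch (not Recluse's actual system) of routing posts through a fine-tuned multilingual transformer. The checkpoint name and the "extremist" label are hypothetical placeholders; in practice you would fine-tune a multilingual model such as XLM-RoBERTa on labelled examples.

```python
# Minimal sketch of multilingual content classification with a pretrained
# transformer. The model name below is a hypothetical fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/xlm-r-extremist-content",  # hypothetical checkpoint
)

def flag_post(text: str, threshold: float = 0.8) -> bool:
    """Return True if the post should be routed to human review."""
    result = classifier(text, truncation=True)[0]
    return result["label"] == "extremist" and result["score"] >= threshold

# The same model handles posts in any language it was trained on,
# unlike a hand-maintained keyword list.
for post in ["example post in English", "exemple de publication en français"]:
    print(post, flag_post(post))
```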

3. Quite recently, a known terrorist group, the Taliban, took control of the Afghan government. What should a platform’s stance be on it?

This is a hard question to navigate, as the answer will likely vary greatly depending on the platform’s philosophy. There is merit to the argument that the Taliban are a significant geopolitical force and that banning their content as a matter of policy hinders people from seeing the whole story. Moreover, hosting such content gives other users an opportunity to criticize, fact-check, and analyze the content, which could enable users who would otherwise be influenced by the propaganda to see the counterargument on the same screen.

Conversely, hosting such content means having to have very clear guidelines on when such content crosses the line to the point of being unacceptable. Left unchecked, those groups are known to publish disgusting or violent content that could alienate users and pose legal risks. Platforms then find themselves in the position of having to define what constitutes “terrorism” and merits removal. An improper definition could have side effects that impact benign marginalized groups.

In contrast, simply banning content promoting terror groups such as the Taliban keeps the rules clearer, but doesn’t fully solve the problem. There are innumerable terror groups, and what precisely constitutes terrorism (and who is guilty of committing it) can be highly culturally specific.

Given that Recluse’s mission heavily involves building and deploying artificial intelligence to combat terrorism, we had to consider this question early on. We settled on targeting groups where there is a global consensus that they are terror groups. This definition ensures that we are never beholden to the political zeitgeist of any given country. Although this higher burden of consensus could inhibit us from targeting some groups we may personally find abhorrent, it ensures that we can operate without having to derive a “terrorist calculus” to evaluate every group.

4. Child grooming and trafficking can be hard to track online. The New York Times did a piece on instances where Facebook often categorizes minors as adults when unable to determine their age. What are your thoughts on this?

Identifying someone’s age is an incredibly difficult task — we all know someone who looks half their age. Consequently, even the best categorization system is going to have some level of error, regardless of how that system is implemented. That said, in the case of child exploitation, false positives and false negatives have significantly different impacts. Identifying potential child exploitation is paramount, whereas the penalty of a false positive is primarily additional workload for the moderation team. Of course, there is a balance to be had — a system with excessive errors is nearly as useless as having no system at all.
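The balance Andrew describes is essentially a precision/recall trade-off. The toy sketch below (with made-up scores, not real data) shows how a platform might pick a decision threshold that favours recall, accepting extra moderator workload over missed cases.

```python
# Toy illustration: choose the highest threshold that still meets a recall
# target, so false negatives (missed exploitation) stay rare even though
# false positives (extra review work) increase.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])  # 1 = exploitation risk
y_score = np.array([0.1, 0.3, 0.4, 0.2, 0.9, 0.7, 0.6, 0.8, 0.15, 0.55])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

target_recall = 0.95
viable = [t for t, r in zip(thresholds, recall[:-1]) if r >= target_recall]
chosen = max(viable) if viable else thresholds.min()
print(f"threshold={chosen:.2f} keeps recall >= {target_recall}")
```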

Thankfully, we don’t have to rely on our ability to identify minors as our sole tool for deplatforming child predators. Identifying and targeting language and behavior consistent with abusers can enable platforms to attack the problem at its source. In fusion with algorithms designed to identify minor users and child sexual abuse material, such techniques can better protect vulnerable groups even in the face of imprecise categorization.
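As a rough illustration of that fusion idea (the signal names and weights are invented for the example, not Recluse's model), several imprecise signals can be combined into a single risk score so that no one classifier has to carry the decision alone.

```python
# Hedged sketch: combine independent, individually imprecise signals into
# one risk score used to decide whether to escalate an account.
from dataclasses import dataclass

@dataclass
class Signals:
    likely_minor: float       # probability the account belongs to a minor
    grooming_language: float  # probability messages match abuser language patterns
    csam_media: float         # probability shared media is CSAM

def risk_score(s: Signals) -> float:
    # Weighted combination; a strong CSAM signal dominates on its own.
    base = 0.3 * s.likely_minor + 0.4 * s.grooming_language + 0.3 * s.csam_media
    return max(base, s.csam_media)

account = Signals(likely_minor=0.4, grooming_language=0.85, csam_media=0.05)
if risk_score(account) > 0.5:
    print("escalate to the safety team for review")
```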
