
Expert’s Corner with CEO of Recluse Laboratories Andrew Johnston

Our expert this month is ex-FBIer Andrew Johnston. Andrew is the CEO and co-founder of Recluse Laboratories. Drawing on twelve years of experience in cybersecurity, Andrew has worked with clients both in an incident response capacity and in proactive services. In addition to his private sector work, Andrew served in the Federal Bureau of Investigation's Cyber and Counterterrorism divisions, where he performed field work and provided technical expertise to criminal and national security investigations.

1. What was your motivation behind starting Recluse Labs?

We started Recluse Labs to solve problems endemic to the threat intelligence industry. Threat intelligence, whether conducted in private industry, academia, or government, relies on a set of highly skilled individuals. These individuals are tasked with gaining access to adversarial communities, building believable identities, and extracting actionable information to form intelligence. Such individuals are few and far between, and the effect is that threat intelligence is often incredibly limited.

We’re passionate about cybersecurity and data science, and we believed that the combination of the two could enable us to have a far greater reach than organizations significantly larger than us. Since then, we’ve been working with industry peers, law enforcement, and intelligence experts to develop an automated, AI-enabled platform for collecting and analyzing intelligence.

2. Are there specific patterns that online platforms should be mindful of when tracking terrorist groups?

One of the more interesting patterns is the mobility of many terrorist groups from one platform to another. In the past few years, there has been plenty of media coverage of up-and-coming social media platforms being swarmed with profiles promoting ISIL and other terrorist groups. Oftentimes, this creates significant challenges for nascent platforms, especially those that aim to have less restrictive moderation than some of the major players. It is worth noting that a strategy of aggressive banning doesn't appear to be effective; terrorists have become accustomed to treating profiles as disposable and regularly create new accounts.

Consequently, the best approach to tracking and disrupting terrorist use of a platform has to occur at the content level. Naturally, simple solutions such as banning a set of words don't scale well, especially for a platform that caters to a multilingual audience. Likewise, human-centric approaches simply can't scale to handle the volume and velocity of content that a healthy social media platform generates on a regular basis. Multilingual machine learning solutions are really the only answer to this problem that can both meet the scale and effectively identify novel terrorist content. We've dedicated a lot of research to developing terrorist content recognition systems that can meet the needs of social platforms, governments, and researchers.
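To make the contrast with keyword lists concrete, here is a minimal sketch of what a model-based, multilingual approach might look like. The model name and the "extremist" label are placeholders for a hypothetical fine-tuned multilingual encoder, not an actual Recluse or Checkstep system.

```python
# Hypothetical sketch: score posts with a single multilingual transformer
# classifier instead of per-language keyword lists.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/xlmr-extremist-content",  # hypothetical fine-tuned model
)

def flag_posts(posts, threshold=0.85):
    """Return posts whose top label is 'extremist' above the review threshold."""
    flagged = []
    for post in posts:
        result = classifier(post)[0]          # {"label": ..., "score": ...}
        if result["label"] == "extremist" and result["score"] >= threshold:
            flagged.append((post, result["score"]))
    return flagged

# Posts in different languages go through the same model, which is what lets
# this approach scale where keyword lists and manual review cannot.
flagged = flag_posts(["example post in English", "exemple de publication en français"])
```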

3. Quite recently, a known terrorist group, the Taliban, took control of the Afghan government. What stance should platforms take on it?

This is a hard question to navigate, as the answer will likely vary greatly depending on the platform’s philosophy. There is merit to the concept that the Taliban are a significant geopolitical force and denying their content as a policy hinders people from seeing the whole story. Moreover, hosting such content gives other users an opportunity to criticize, fact-check, and analyze the content, which could enable users who would otherwise be influenced by the propaganda to see the counterargument on the same screen.

Conversely, hosting such content means having to have very clear guidelines on when such content crosses the line to the point of being unacceptable. Left unchecked, those groups are known to publish disgusting or violent content that could alienate users and pose legal risks. Platforms then find themselves in the position of having to define what constitutes “terrorism” and merits removal. An improper definition could have side effects that impact benign marginalized groups.

In contrast, simply banning content that promotes terror groups such as the Taliban keeps the rules clearer, but doesn't fully solve the problem. There are innumerable terror groups, and what precisely constitutes terrorism (and who is guilty of committing it) can be highly culturally specific.

Given that Recluse’s mission heavily involves building and deploying artificial intelligence to combat terrorism, we had to consider this question early on. We settled on targeting groups where there is a global consensus that they are terror groups. This definition ensures that we are never beholden to the political zeitgeist of any given country. Although this higher burden of consensus could inhibit us from targeting some groups we may personally find abhorrent, it ensures that we can operate without having to derive a “terrorist calculus” to evaluate every group.

4. Child grooming and trafficking can be hard to track online. The New York Times did a piece on how Facebook often categorizes minors as adults when it is unable to determine their age. What are your thoughts on this?

Identifying someone's age is an incredibly difficult task; we all know someone who looks half their age. Consequently, even the best categorization system is going to have some level of error, regardless of how that system is implemented. That said, in the case of child exploitation, false positives and false negatives have significantly different impacts. Identifying potential child exploitation is paramount, whereas the penalty of a false positive is primarily additional workload for the moderation team. Of course, there is a balance to be had: a system with excessive errors is nearly as useless as having no system at all.
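As a toy illustration of that balance, the sketch below uses synthetic data and illustrative numbers only: it picks the decision threshold for a hypothetical minor-detection score by requiring a minimum recall, so that few minors are missed, and then accepts whatever precision (and therefore reviewer workload) that threshold yields.

```python
# Synthetic illustration of the error trade-off: choose the highest decision
# threshold that still reaches a target recall, and accept the resulting
# precision (extra review workload). All numbers are toy values.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                       # 1 = user is a minor
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, 1000), 0.0, 1.0)

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

target_recall = 0.95                        # missing a minor is the costly error
ok = recall[:-1] >= target_recall           # recall[:-1] aligns with thresholds
chosen = thresholds[ok][-1]                 # highest threshold meeting the target

print(f"threshold={chosen:.2f}, precision at that threshold={precision[:-1][ok][-1]:.2f}")
```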

Thankfully, we don't have to rely on our ability to identify minors as our sole tool for deplatforming child predators. Identifying and targeting language and behavior consistent with abusers can enable platforms to attack the problem at its source. Fused with algorithms designed to identify minor users and child sexual abuse material, such techniques can better protect vulnerable groups even in the face of imprecise categorization.
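A schematic sketch of that fusion idea is below. The weights, threshold, and input scores are purely illustrative, and the three inputs stand in for real models (age estimation, grooming-language detection, and known-CSAM hash matching) that are not specified in the interview.

```python
# Illustrative only: combine imprecise signals so no single model decides alone.
def should_escalate(minor_prob: float, grooming_prob: float, hash_hits: int,
                    weights=(0.4, 0.6), threshold=0.7) -> bool:
    """Decide whether an account should be escalated for human review.

    minor_prob    -- hypothetical age-estimation model output (P(user is a minor))
    grooming_prob -- hypothetical grooming-language/behavior classifier output
    hash_hits     -- matches against a known-CSAM hash list
    """
    if hash_hits > 0:                 # a confirmed hash match escalates immediately
        return True
    combined = weights[0] * minor_prob + weights[1] * grooming_prob
    return combined >= threshold

# A moderately confident age signal plus a strong behavioral signal still
# escalates, even though neither signal would be decisive on its own.
print(should_escalate(minor_prob=0.55, grooming_prob=0.9, hash_hits=0))  # True
```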
