
European Parliament Approves the AI Act

What will EU regulation of AI-based products and services look like?

As the hype around artificial intelligence continues to grow, the European Union has taken a crucial step towards the world’s first attempt at AI regulation. The European Parliament has approved the current draft of the legislation known as the AI Act.

This is our first look at what regulation of AI-based products and services could look like, though there are details yet to be worked out. Among the headline provisions are outright bans on the use of live facial recognition technology and predictive policing.

Facial recognition software has been legal almost everywhere in the world, with just two countries (Belgium and Luxembourg) ever having banned it. Its use is often associated with China, where it forms a critical piece of the country’s larger “social credit” project. Predictive policing, meanwhile, is currently used in the United States, the United Kingdom, Denmark, Japan, China, and elsewhere, and the list looks likely to grow. It’s an approach that uses personal data (such as past convictions, location, and group affiliations) to predict future behaviour. The AI Act’s ban on these technologies was welcomed by groups such as Amnesty International, but it didn’t come without a fight.

Another focus of the AI Act is generative AI applications such as ChatGPT. The growing popularity of this technology was clearly on the minds of legislators. The approved draft text places obligations on these applications, including the labelling of AI-generated media and the reporting of what copyrighted material was used to train the underlying models. There is also a general requirement that every step in training AI models abides by European law. Contravening these provisions risks deletion of the offending application or a fine of up to 7% of revenue.
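
To make the labelling and reporting obligations more concrete, here is a minimal, purely illustrative Python sketch of what a machine-readable disclosure could look like. The AI Act does not prescribe any such schema; every class and field name below (AIGeneratedMediaLabel, TrainingDataDisclosure, and so on) is a hypothetical assumption, not a standard.

```python
# Hypothetical sketch only: the AI Act does not define a disclosure format.
# All names and fields here are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIGeneratedMediaLabel:
    """Machine-readable label a provider might attach to generated media."""
    content_id: str
    generator: str  # e.g. the model or service that produced the content
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    ai_generated: bool = True  # the core "this is AI-generated" disclosure


@dataclass
class TrainingDataDisclosure:
    """Summary record of copyrighted material used to train a model."""
    model_name: str
    dataset_name: str
    copyrighted_sources: list[str]  # publishers/collections, not full texts


label = AIGeneratedMediaLabel(
    content_id="img-0001", generator="example-image-model"
)
disclosure = TrainingDataDisclosure(
    model_name="example-image-model",
    dataset_name="example-training-corpus",
    copyrighted_sources=["Example News Archive", "Example Stock Photo Library"],
)

# Emit both records as JSON, the kind of artefact a regulator or user could inspect.
print(json.dumps({"label": asdict(label), "disclosure": asdict(disclosure)}, indent=2))
```

In practice, providers would more likely build on emerging provenance standards such as C2PA than invent a bespoke format, but the substance of the disclosure would be similar: who generated the content, when, and what copyrighted material went into the model.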

This legislation arrives at a time when the conversation around AI is split between enthusiasm for its innovative potential and calls for regulation. Lawmakers in much of the rest of the world are still unsure how such a legal framework would come together. The EU, on the other hand, has moved at a pace that makes even those who welcomed regulation wary.

Some of those concerns are technical, with the copyrighted-materials requirement in particular reportedly standing out as “impossible” to comply with. Civil society organisations are also concerned about some of the AI Act’s current blind spots pertaining to human rights.

What’s Next?

The draft legislation is now set to be negotiated further with the European Commission and the Council, which represents the governments of the EU’s member states. Though the bloc is hoping to reach an agreement by the end of the year, there is a lot to figure out. The open questions include, but are certainly not limited to:

  • How, specifically, will the AI Act be enforced?
  • What will the final list of High-Risk Systems look like?
  • What are the rules governing the interplay between the AI Act and, say, GDPR?
  • How will individuals seek the redress the AI Act currently provides for if they determine they were harmed by in-scope services? And how would they even know they had been harmed in the first place?
