Understanding AI Systems and Obligations under the EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, establishing a comprehensive framework for regulating AI systems according to the risk they pose. Its main aim is to protect the rights and safety of EU citizens while encouraging innovation.

In our last post we explored the new roles introduced by the EU AI Act and key dates to consider.

In this post, we’ll take a closer look at the Act’s risk-based approach to regulating AI, exploring the five key categories of AI systems and the obligations that come with each.

The Five Categories of AI Systems

  1. Prohibited AI
  2. High-Risk AI
  3. Limited-Risk AI
  4. Minimal-Risk AI
  5. General-Purpose AI

Each of these categories comes with its own set of requirements to ensure that AI is developed and used responsibly.
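
To make the taxonomy easier to reason about, here is a minimal Python sketch that models the five categories and pairs each with a one-line summary of its headline obligations. The category names come from the Act itself; the RiskCategory enum, the summaries, and everything else in the sketch are our own illustrative simplification, not legal advice.

from enum import Enum

class RiskCategory(Enum):
    """The five categories of AI systems under the EU AI Act."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"
    GENERAL_PURPOSE = "general-purpose"

# Illustrative one-line summaries only; the Act itself is the
# authoritative source for the obligations in each category.
HEADLINE_OBLIGATIONS = {
    RiskCategory.PROHIBITED: "Banned outright from the EU market.",
    RiskCategory.HIGH_RISK: "Conformity assessments plus strict transparency and accountability measures.",
    RiskCategory.LIMITED_RISK: "Transparency obligations (disclose AI interaction and AI-generated content).",
    RiskCategory.MINIMAL_RISK: "No specific obligations under the Act.",
    RiskCategory.GENERAL_PURPOSE: "Provider obligations focused on managing systemic risk.",
}

print(HEADLINE_OBLIGATIONS[RiskCategory.HIGH_RISK])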

Prohibited AI: What’s Off-Limits?

Some AI systems are entirely banned due to their unacceptable risk to individuals and society. These include:

  • Subliminal, manipulative, or deceptive techniques: AI that distorts behaviour, leading to significant harm.
  • Social scoring systems: AI that evaluates individuals based on their personal behaviour or traits, potentially resulting in unfair treatment.

For a full list of banned AI systems, check out the detailed provisions in the Act.

High-Risk AI: Striking a Balance

High-risk AI systems are those that have a significant impact on the health, safety, or fundamental rights of EU citizens. Some examples include:

  • AI used in safety-critical products (e.g., medical devices, transportation systems)
  • AI in recruitment, credit scoring, or public safety

Key Requirements for High-Risk AI:

  • Conformity assessments (in certain cases carried out by an independent third party) to ensure systems are compliant
  • Strict transparency and accountability measures

The Act includes some provisions that allow for lower risk classifications in specific cases. A database of high-risk AI systems will also be created to make it easier to check compliance before deployment.

Pro Tip: Most of the obligations in the EU AI Act apply to high-risk AI systems, so understanding this category is crucial for compliance.

Limited-Risk AI: Focusing on Transparency

Limited-risk AI systems mainly raise concerns about transparency. Examples include:

  • Chatbots interacting with EU citizens
  • AI generating synthetic content (e.g., text, images, videos, audio)
  • AI used to create deepfakes or to generate text that informs the public on matters of public interest

Transparency Obligations:

  • Users must be informed when they’re interacting with AI
  • It should be clear when content has been artificially generated or altered

These transparency measures are designed to build trust and ensure users can make informed decisions when interacting with AI.
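
To illustrate the first obligation in code, here is a hedged Python sketch of a chatbot reply wrapper that always carries an explicit AI disclosure. The ChatReply class and the disclosure wording are hypothetical; the Act requires that users be informed, but it does not prescribe this structure or text.

from dataclasses import dataclass

@dataclass
class ChatReply:
    """Hypothetical chatbot reply that always carries an AI disclosure."""
    text: str
    ai_generated: bool = True  # machine-readable flag for downstream use

    def render(self) -> str:
        # Human-readable disclosure shown to the user; the wording here
        # is our own example, not text mandated by the Act.
        disclosure = "[You are chatting with an AI system.]"
        return f"{disclosure}\n{self.text}"

print(ChatReply(text="Your order has shipped.").render())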

Minimal-Risk AI: Everyday AI Systems

Minimal-risk AI systems are typically simple, low-impact tools, like basic automation or apps that don’t interact with EU citizens.

Pro Tip: Many everyday AI applications fall into this category, so it’s the least burdensome in terms of compliance requirements.

General-Purpose AI: Managing Systemic Risks

General-purpose AI (GPAI) systems, like ChatGPT, Siri, Google Assistant, Alexa, and Google Translate, are versatile and have broad applications. Due to their widespread use, the Act imposes specific obligations on GPAI providers, particularly around systemic risks.

Key Concerns for GPAI:

  • If misused, these systems can contribute to serious harms such as major accidents or cyberattacks
  • Providers must manage the risks tied to the scale and flexibility of these systems

Exemptions: When the Rules Don’t Apply

Some AI systems are excluded from the Act’s scope, including those:

  • Used exclusively for scientific research and development
  • Designed for personal, non-commercial use
  • Tested in controlled environments (e.g., labs)
  • Released under open-source licenses, provided they’re not prohibited or high-risk systems

The European Commission has recently published Guidance on the definition of an ‘AI system’ (only systems that fall under this definition come within the scope of the EU AI Act). The definition comprises seven main elements:

1. a machine-based system;
2. that is designed to operate with varying levels of autonomy;
3. that may exhibit adaptiveness after deployment;
4. and that, for explicit or implicit objectives,
5. infers, from the input it receives, how to generate outputs
6. such as predictions, content, recommendations, or decisions
7. that can influence physical or virtual environments.

The Guidance expands upon each of these elements with the aim of providing non-binding advice to assist organisations in determining whether their systems fall within the scope of the Act.
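
The seven elements read naturally as a checklist, so here is a rough Python sketch of that idea. The element descriptions paraphrase the definition, and the screening function is purely illustrative; it is no substitute for the Commission's non-binding Guidance or for legal advice.

# The seven definitional elements of an 'AI system', paraphrased.
AI_SYSTEM_ELEMENTS = [
    "machine-based system",
    "designed to operate with varying levels of autonomy",
    "may exhibit adaptiveness after deployment",
    "has explicit or implicit objectives",
    "infers from its input how to generate outputs",
    "outputs include predictions, content, recommendations, or decisions",
    "outputs can influence physical or virtual environments",
]

# Element 3 is phrased permissively ("may exhibit..."), so this rough
# screen treats it as optional rather than strictly required.
OPTIONAL_ELEMENTS = {"may exhibit adaptiveness after deployment"}

def meets_definition(assessment: dict[str, bool]) -> bool:
    """Return True only if every non-optional element is satisfied."""
    return all(
        assessment.get(element, False)
        for element in AI_SYSTEM_ELEMENTS
        if element not in OPTIONAL_ELEMENTS
    )

# Example: a static lookup table performs no inference, so it falls outside.
static_tool = {element: True for element in AI_SYSTEM_ELEMENTS}
static_tool["infers from its input how to generate outputs"] = False
print(meets_definition(static_tool))  # False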

The European Commission has also recently published Guidance on prohibited AI practices. The EU AI Act prohibits the “placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values”.

The eight prohibited practices identified in the EU AI Act are as follows:

1. Harmful manipulation and deception
2. Harmful exploitation of vulnerabilities
3. Social scoring
4. Individual criminal offence risk assessment and prediction
5. Untargeted scraping to develop facial recognition databases
6. Emotion recognition
7. Biometric categorisation
8. Real-time remote biometric identification

The Guidance expands upon each of these practices with the aim of providing non-binding advice to assist organisations in determining whether any of their AI practices are prohibited.

All organisations that develop AI (‘Providers’) or use AI (‘Deployers’) should identify whether their systems come within the scope of the EU AI Act and ensure that they are not used for a prohibited practice.
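
As a closing illustration, here is a hedged Python sketch of how that screening might be recorded. The practice labels mirror the Act's list above; the screen_for_prohibited_practices function and its inputs are hypothetical, and a real assessment must work from the Act's precise wording and the Commission's Guidance.

# The eight prohibited practices, as listed above.
PROHIBITED_PRACTICES = {
    "harmful manipulation and deception",
    "harmful exploitation of vulnerabilities",
    "social scoring",
    "individual criminal offence risk assessment and prediction",
    "untargeted scraping to develop facial recognition databases",
    "emotion recognition",
    "biometric categorisation",
    "real-time remote biometric identification",
}

def screen_for_prohibited_practices(intended_uses: set[str]) -> list[str]:
    """Return any intended uses that match a prohibited practice."""
    return sorted(use for use in intended_uses if use in PROHIBITED_PRACTICES)

# Example: one permissible use, one prohibited one.
uses = {"customer support chatbot", "social scoring"}
print(screen_for_prohibited_practices(uses))  # ['social scoring']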

Conclusion: Staying Ahead of Compliance

The EU AI Act takes a risk-based approach, meaning not all AI systems are treated the same way. The obligations vary depending on the classification of the system. By understanding which category your AI system falls into and what’s required, you can be better prepared for compliance.

Check out our next post, where we’ll take a deeper dive into the specific obligations for deployers using high-risk AI systems.