The EU Artificial Intelligence Act (AI Act) introduces two pivotal roles in the AI lifecycle, providers and deployers, and outlines distinct responsibilities for each. Identifying which of these roles your organisation plays is essential for navigating this complex regulatory landscape effectively. In this blog we break down these new roles in four parts:
- Key Distinctions Between Providers and Deployers
- Obligations for Providers and Deployers of High-Risk AI Systems
- Obligations for General-Purpose AI Models
- Compliance Strategy for Providers and Deployers
Part 1: Key Distinctions Between Providers and Deployers
Providers
A provider is any entity that develops an AI system or general-purpose AI model, or has one developed on its behalf, and places it on the market under its own name or trademark, whether for payment or free of charge.
Providers fall within the AI Act’s scope if they:
- Place AI systems or general-purpose AI models on the EU market, regardless of where they are established.
- Operate outside the EU but have the output produced by their AI systems used within the EU.
Providers bear responsibility for the compliance and safety of their AI systems. Even if development is outsourced, providers remain accountable for meeting regulatory requirements.
Deployers
A deployer is any entity that uses an AI system under its authority in the course of its operations, except where the use is personal and non-professional. Deployers fall within the scope if they are:
- Based in the EU.
- Operating outside the EU with outputs used in the EU.
Deployers must ensure AI systems are used safely and in compliance with the AI Act, especially since they often engage directly with end-users.
Why These Definitions Matter
The distinction between providers and deployers is significant since most regulatory obligations apply to providers. However, deployers also play a crucial role in the safe and ethical use of AI.
Part 2: Obligations for Providers and Deployers of High-Risk AI Systems
The AI Act takes a “risk-based approach,” applying specific requirements to high-risk AI systems: those used in areas such as public safety, biometrics, recruitment, critical infrastructure, and financial services.
Provider Obligations for High-Risk AI
Providers must conduct a conformity assessment to demonstrate the system’s compliance before placing it on the market. Their obligations include:
- Risk management and data governance
- Technical documentation and record-keeping (see the logging sketch after these lists)
- Transparency and human oversight
- Ensuring accuracy, robustness, and cyber security
Providers must also:
- Implement quality management and post-market monitoring systems.
- Take corrective action if risks to health, safety, or rights arise.
- Report serious incidents.
- Appoint an authorised EU representative if outside the EU.
- Ensure supplier contracts support regulatory compliance.
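To make the record-keeping and logging obligation more concrete, here is a minimal, hypothetical sketch in Python of automatic event logging for a high-risk AI system. The AI Act requires that high-risk systems allow events to be recorded automatically, but it does not prescribe a format: the file name, field names, and hashing approach below are purely illustrative assumptions.

```python
# Illustrative only: the AI Act requires automatic logging for high-risk
# systems but does not prescribe a format. All names here are assumptions.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_system_events.jsonl"  # hypothetical append-only event log

def log_event(system_id: str, model_version: str, input_data: str,
              output: str, operator_id: str) -> dict:
    """Record one traceable inference event, supporting the kind of
    record-keeping duty described above."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        # Hash rather than store the raw input, limiting personal data in logs.
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
        "output": output,
        "operator_id": operator_id,  # supports tracing human oversight
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_event("cv-screener-01", "2.3.1", "applicant CV text...",
          "shortlist", operator_id="hr-analyst-42")
```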
Deployer Obligations for High-Risk AI
Deployers are responsible for operating AI systems in accordance with the provider’s instructions for use. Their duties include:
- Assigning human oversight and logging usage (see the sketch after this list).
- Informing affected users, conducting impact assessments, and explaining decisions.
- Reporting incidents and cooperating with authorities.
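As a rough illustration of the human oversight duty above, the sketch below (again hypothetical Python, with function and field names of our own invention) treats an AI output as a recommendation that takes effect only once a named reviewer confirms or overrides it.

```python
# Hypothetical human-in-the-loop gate: an AI recommendation only takes
# effect once a named reviewer confirms or overrides it.
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str
    proposed_action: str   # e.g. "reject application"
    confidence: float      # model's self-reported confidence, 0.0-1.0

def apply_with_oversight(rec: Recommendation, reviewer_id: str,
                         approved: bool, reason: str) -> str:
    """Return the final decision, recording who reviewed it and why.

    The deployer, not the model, owns the outcome: the AI output is
    treated as advice until a human accepts or overrides it.
    """
    if approved:
        return (f"{rec.proposed_action} (AI-assisted, confirmed by "
                f"{reviewer_id}: {reason})")
    return f"overridden by {reviewer_id}: {reason}"

rec = Recommendation("application-1107", "reject application", 0.62)
print(apply_with_oversight(rec, "case-officer-9", approved=False,
                           reason="low confidence; manual review required"))
```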
Common Obligations for Providers and Deployers
Both roles must promote AI literacy within their teams and fulfil transparency requirements to inform individuals interacting with AI systems.
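On the transparency point, the Act requires that individuals are informed when they are interacting with an AI system. The short Python sketch below shows one way a deployer-facing chatbot might attach such a disclosure; the wording and the wrapper function are purely illustrative assumptions, not language from the Act.

```python
# Hypothetical wrapper that prepends an AI-interaction disclosure, so
# individuals are informed they are dealing with an AI system.
AI_DISCLOSURE = ("You are interacting with an AI system. "
                 "A human review of its output is available on request.")

def respond(user_message: str, generate) -> str:
    """Wrap any text-generating callable with a standing disclosure."""
    return f"{AI_DISCLOSURE}\n\n{generate(user_message)}"

# Stand-in for a real model call:
print(respond("What is my claim status?", lambda m: f"(model answer to: {m})"))
```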
Part 3: Obligations for General-Purpose AI Models
Obligations for general-purpose AI models fall on providers alone, with additional duties where a model presents systemic risk. Providers must:
- Prepare detailed technical documentation and summaries on training content.
- Put in place a policy to comply with EU copyright law.
- Report serious incidents and notify the European Commission if systemic risks emerge.
Deployers are not directly responsible for general-purpose models unless these models are embedded within a high-risk AI system.
Part 4: Compliance Strategy for Providers and Deployers
For both providers and deployers, clear contractual terms are critical to managing legal and reputational risks. Providers and deployers should:
- Define roles and obligations contractually to avoid ambiguity.
- Ensure supplier cooperation for compliance.
- Address liability for non-compliance and associated risks.
While deployers have fewer obligations, they must verify providers’ compliance and monitor AI system performance to reduce potential regulatory issues.
Conclusion: Preparing for Compliance Under the EU AI Act
The EU AI Act is set to transform AI regulation, placing detailed obligations on both providers and deployers to ensure that AI systems operate safely, transparently, and ethically. For providers, a proactive approach to system safety and compliance is critical. Deployers, though less burdened, have essential responsibilities to use AI safely and inform end-users adequately.
To navigate these requirements effectively, providers and deployers should establish clear contractual agreements, allocate liability, and work closely with suppliers to build a collaborative compliance framework. Promoting AI literacy within their teams also strengthens compliance by equipping staff to meet these obligations responsibly.
The EU AI Act is shaping a future of responsible, secure AI. By aligning practices with these standards, organisations can mitigate risk, enhance public trust, and set a benchmark for ethical AI innovation, positioning themselves as leaders in the evolving AI regulatory landscape.
Written by Robert Wassall
Robert Wassall is a solicitor, an expert in data protection law and practice, and a Data Protection Officer. As Head of Legal Services at NormCyber, Robert heads up its Data Protection as a Service (DPaaS) solution and advises organisations across a variety of industries. Robert and his team support them in all matters relating to data protection and its role in fostering trusted, sustainable relationships with their clients, partners and stakeholders.