Deployers of High-Risk AI Systems: Navigating the EU AI Act
15 January 2025 // 4 Min Read
The EU AI Act (Regulation (EU) 2024/1689) has introduced major changes to how AI is regulated across the European Union. In this third part of our series, we take a closer look at what deployers of high-risk AI systems need to know, a category that affects a large number of organisations.
As a deployer, you’re the one using an AI system, and your compliance responsibilities will depend on how and where high-risk AI is used.
What is a Deployer?
A deployer is an organisation or other entity that uses an AI system under its authority, whether for operations, decision-making, or other activities. While many AI systems used by organisations fall into lower-risk categories, certain use cases involve high-risk AI, which brings extra obligations.
Examples of High-Risk AI Use by Deployers:
Recruitment and HR: AI systems in recruitment are becoming more common, and they might include tools that:
Target job adverts
Analyse and filter job applications
Evaluate candidates for hiring decisions
Allocate tasks or monitor performance based on individual behaviours or traits
HR teams need to be careful when using AI-powered hiring tools, as they might unintentionally trigger high-risk obligations under the AI Act.
Financial Institutions: AI used for assessing creditworthiness or setting credit scores falls under the high-risk category.
Insurance Providers: AI systems that assess risks or set pricing for life and health insurance policies also count as high-risk.
Compliance Requirements for Deployers
Deployers of high-risk AI systems must meet strict compliance requirements to ensure these systems are used responsibly and their risks are minimised. These include:
Technical and Organisational Measures:
Make sure the AI system is used in accordance with the provider's instructions for use and in line with its intended purpose.
Human Oversight:
Assign a qualified person with the right training and authority to monitor the AI system and intervene when necessary.
Risk Monitoring and Incident Reporting:
Monitor how the AI system is performing on an ongoing basis.
Notify the provider or distributor and the relevant Market Surveillance Authority (MSA) if the system presents a risk to health, safety, or fundamental rights.
Suspend use of the system if a serious incident occurs.
Data Quality and Relevance:
Ensure that any input data under the deployer's control is relevant and sufficiently representative for the AI system's intended purpose.
Log Maintenance:
Retain logs generated by the system for at least six months, where they are under the deployer's control (a simple retention check is sketched after this list).
Employee Notification:
Inform employees and their representatives before they are subject to a high-risk AI system in the workplace.
Data Protection Impact Assessment (DPIA):
Use the information provided by the AI provider to carry out a DPIA where one is required, ensuring the system is used in line with data protection law.
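Some of these operational duties, the log-retention point in particular, lend themselves to simple automation. The sketch below is a minimal, hypothetical Python example of checking how far back an AI system's retained logs reach; the log directory, file pattern, and 183-day figure are assumptions to adapt to your own deployment and legal advice, not anything prescribed by the Act.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Assumed log location and naming -- adjust to wherever the AI system under
# your control actually writes its automatically generated logs.
LOG_DIR = Path("/var/log/ai-system")
MIN_RETENTION = timedelta(days=183)  # roughly the six-month minimum retention period


def retention_coverage(log_dir: Path = LOG_DIR) -> timedelta | None:
    """Return how far back the retained logs reach, or None if no logs are found."""
    timestamps = [
        datetime.fromtimestamp(p.stat().st_mtime, tz=timezone.utc)
        for p in log_dir.glob("*.log")
    ]
    if not timestamps:
        return None
    return datetime.now(timezone.utc) - min(timestamps)


if __name__ == "__main__":
    coverage = retention_coverage()
    if coverage is None:
        print("WARNING: no logs found under the deployer's control")
    elif coverage < MIN_RETENTION:
        # A recently deployed system will legitimately show less than six months.
        print(f"WARNING: logs cover only {coverage.days} days, below the six-month minimum")
    else:
        print(f"OK: logs cover {coverage.days} days")
```

A check like this could run on a schedule and feed the same monitoring process used for risk and incident reporting, but it is only an illustration of the record-keeping duty, not a substitute for legal review of your retention obligations.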
When Deployers Become Providers
In some situations, deployers might also take on the role of a provider under the AI Act. This can happen if:
They make a substantial modification to a high-risk AI system.
Their modifications turn a previously low-risk system into a high-risk one.
If your organisation customises or makes major changes to high-risk AI systems, you’ll need to comply with the provider obligations as well.
Conclusion
Key Takeaway
Deployers of high-risk AI systems can't simply use these tools; they need to make sure the systems are used safely, transparently, and in line with the rules. By putting the required measures in place and staying up to date, organisations can reduce risk and maintain trust, all while unlocking the full potential of AI.
In the next post, we look at the penalties for non-compliance and explore the governance framework that underpins the EU AI Act.
Get in touch to take a different approach to cyber security.