Practical Steps to Align Your Organisation with the EU AI Act
15 January 2025 // 5 Min Read
As the EU AI Act (EU Regulation 2024/1689) sets new standards for the responsible development and use of AI, organisations need to take proactive steps to ensure they’re fully compliant. Whether you’re already using AI systems or planning to adopt them, it’s crucial to understand your obligations under the Act.
In our previous post, we explored the penalties for non-compliance, the governance system, and how the Act extends its reach beyond the EU’s borders.
Below are some practical steps to help you navigate the requirements and align your organisation with the EU AI Act.
1. Develop an AI Policy
Start by creating or updating your organisation's AI policy. This document should clearly set out the uses of AI prohibited by the EU AI Act, such as deploying AI systems that manipulate behaviour or exploit people's vulnerabilities, along with the Act's other banned practices.
Top Tip: Even if your organisation isn’t likely to intentionally engage in these activities, the broad definitions of the Act mean it’s essential to include these prohibitions in your policy. This not only helps you stay compliant but also shows your commitment to responsible AI use.
2. Establish AI Governance
Make sure you have the right person or team in place to oversee compliance with the AI Act. This could be your Data Protection Officer (DPO) or someone with the right technical expertise and seniority to manage high-risk AI systems.
Good governance ensures your organisation:
Monitors AI system performance
Effectively manages risks
Communicates clearly with stakeholders and regulators
3. Review AI Procurement Practices
If you’re commissioning or deploying high-risk AI systems, take extra care during procurement:
Ensure everyone involved understands the system’s risk level and the legal implications.
Avoid working with providers unless you’ve carried out thorough due diligence.
Be mindful of any customisation requests for third-party AI systems, as these might trigger additional compliance obligations.
4. Update AI Contracts
Your contracts should clearly outline the roles and responsibilities of all parties in the AI supply chain to ensure compliance with the Act. Key points to address include:
Considering potential regulatory changes over the course of the contract.
Setting expectations for data governance, transparency, and system design.
Top Tip: Check out the EU Commission’s model clauses for public procurement of AI systems. These can serve as a good starting point, even for private organisations, and include provisions around compliance. The Society for Computers and Law is also working on guidance and sample clauses to support businesses.
5. Review and Update Privacy Notices
Make sure your organisation’s privacy notices clearly explain how AI systems process personal data. Being transparent with employees, customers, and stakeholders helps build trust and ensures compliance with both the EU AI Act and GDPR.
6. Invest in AI Literacy
From 2 February 2025, providers and deployers of AI systems must meet the Act’s AI literacy obligations. This means:
Training the staff who operate or interact with AI systems.
Ensuring they have a “sufficient and appropriate level” of understanding about the risks, limitations, and proper use of AI.
By prioritising training, your team will be better equipped to operate AI systems safely and effectively, reducing the risk of non-compliance.
Conclusion: Take Action Now
Implementing these steps early will not only get your organisation ready for the EU AI Act but also position you as a leader in ethical and responsible AI deployment. With the regulation’s phased implementation already in motion, getting ahead of the game is key to avoiding penalties and maintaining trust in your AI systems.
The UK's Approach to Regulating AI
This blog looks at the UK's approach to regulating the use of AI and how that compares to what's happening in other countries.
UK
The previous Conservative government was emphatic that it would not introduce AI-specific legislation. Instead, key regulators were asked to publish their individual strategic approaches to managing the risks presented by AI.
However, in July 2024 the new Labour government said it would “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
In January 2025 the government published its AI Opportunities Action Plan. The Plan says that “the UK’s current pro-innovation approach to regulation is a source of strength relative to other more regulated jurisdictions and we should be careful to preserve this”.
This implies no AI-specific legislation in the foreseeable future (though it sits uneasily with a comment made in October 2024 by Technology Secretary Peter Kyle that the UK government would bring in legislation to “safeguard against the risks of artificial intelligence”).
EU
In August 2024 the EU AI Act came into effect, introducing comprehensive rules that govern AI systems operating in the EU according to their risk profile. Implementation is tiered, with many key provisions not taking effect until August 2026.
The EU leads globally on AI legislation and the extra-territorial scope of the EU AI Act means that any in-scope AI system used in the EU or whose output affects individuals in the EU is caught – regardless of where the developer or provider is located.
In any event, given the nature of AI-based technology, few AI systems of any significance will operate exclusively within a single jurisdiction. This means that AI legislation, wherever it is passed, has effects in every jurisdiction. In other words, the UK is, de facto, subject to the legislative standards of the EU AI Act.
Conclusion
The commercial reality for businesses operating internationally is that there will be a drift towards the highest standard of AI regulation (albeit not without significant compliance challenges). So the EU AI Act (badged differently, so as to be seen as British) may be here sooner than we think.
Get in touch to take a different approach to cyber security.