Understanding the multifaceted nature of these risks is essential for business leaders, regardless of whether they are creating AI systems or merely using them. Here are the key considerations every board should be aware of:
- The Use of AI in Their Business
- The AI Regulatory Landscape
- The Benefits and Risks of AI
The Regulatory Landscape of AI
The European Union (EU) has adopted a risk-based approach to AI regulation, culminating in the introduction of the EU AI Act, which came into force on August 1, 2024, and will be fully effective from August 2, 2026. This landmark legislation governs all AI models marketed or utilised within the EU, categorising them into four tiers based on the level of risk they present:
- Unacceptable Risk: These AI systems are prohibited outright due to their potential for harm.
- High Risk: These require rigorous compliance with specific regulations and obligations.
- Limited Risk: These entail some oversight but are subject to less stringent requirements.
- Minimal Risk: These are subject to very few obligations, as they pose the least risk.
Each risk category carries unique assessment, disclosure, and governance obligations that organisations must navigate. Detailed compliance requirements will be set out in forthcoming guidance. In the interim, it is critical for boards of companies marketing or using AI models within the EU to stay informed about the implications of the EU AI Act.
In the UK, while there is currently no equivalent to the EU AI Act, the government has signalled its intent to develop AI safety legislation, indicating a shift towards a more structured regulatory framework.
Guiding Principles for Boards
To effectively manage the risks associated with AI, boards should adopt the following guiding principles:
Understand the AI Risk
Boards must gain a comprehensive understanding of how AI technologies are employed within their organisations. This understanding includes not only the potential benefits AI can bring but also the specific risks that arise from its use. Whether it's issues related to data privacy, algorithmic bias, or operational reliability, being aware of these factors is essential for informed decision-making.
Establish an AI Governance Framework
Implementing an appropriate governance framework is crucial for managing AI-related risks. Boards should ensure that their organisations have robust processes in place to monitor and mitigate these risks effectively. Regular reviews of this framework will help ensure it remains relevant and is being properly enforced, facilitating ongoing compliance and risk management.
Stay Informed About Sector-Specific Risks and Regulations
Given the rapid evolution of AI technology and regulatory environments, boards must remain vigilant about industry-specific risks and regulations. Engaging with industry groups, regulatory bodies, and thought leaders can provide valuable insights into emerging challenges and best practices, enabling organisations to stay ahead of the curve.
Conclusion
As AI continues to reshape the way we conduct business, the responsibility to understand and mitigate associated risks falls squarely on the shoulders of company boards. By adhering to a clear framework that emphasises risk awareness, governance, and ongoing education, boards can better navigate the complexities of AI regulation and unlock the potential benefits of these transformative technologies. The future of AI in business is bright, but proactive governance and informed oversight are essential to ensure that organisations harness its capabilities responsibly and sustainably.
Written by Robert Wassall
Robert Wassall is a solicitor, an expert in data protection law and practice, and a Data Protection Officer. As Head of Legal Services at NormCyber, Robert heads up its Data Protection as a Service (DPaaS) solution and advises organisations across a variety of industries. Robert and his team support them in all matters relating to data protection and its role in fostering trusted, sustainable relationships with their clients, partners and stakeholders.