Navigating the EU AI Act: A comprehensive guide for businesses

Introduction

The European Union (EU) AI Act is landmark legislation regulating the development and use of artificial intelligence (AI) within the EU. Proposed in April 2021 and politically agreed in December 2023, the Act entered into force on 1 August 2024, with most of its provisions applying from 2026. It introduces a horizontal framework of obligations for AI providers and deployers, emphasising a risk-based approach to ensure AI systems are safe, transparent, and trustworthy.

A business that uses, supplies or develops AI and operates in the EU must comply with the AI Act. If its AI systems are categorised as high risk, the firm must undertake a conformity assessment and is likely to need an external auditor to independently assess compliance. Even for firms not caught by the AI Act, compliance is good practice and is likely to prepare businesses for potential future UK and US legislation.


A Risk-Based Approach is Fundamental to the EU AI Act

The EU AI Act categorises AI systems into four risk levels: unacceptable risk, high risk, limited risk, and low or minimal risk (a short code sketch of this taxonomy follows the list below).

  • Unacceptable risk: AI systems that pose a significant threat to individuals’ rights and safety are prohibited outright. This includes applications such as social scoring systems and, with narrow exceptions, real-time remote biometric identification in publicly accessible spaces.
  • High risk: AI systems that could cause significant harm are subject to stringent requirements, including CE marking. These include AI used in critical infrastructure, education, employment, law enforcement, and healthcare. High-risk AI systems must undergo a conformity assessment to ensure compliance with the Act’s requirements, which in some cases involves an independent external assessment of compliance.
  • Limited risk: These AI systems must meet transparency obligations, ensuring users are informed when interacting with AI. Examples include chatbots and AI-generated content tools.
  • Low or minimal risk: These AI systems, such as spam filters and AI-enabled recommendation tools, are largely unregulated but must still comply with general EU standards.
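To make the taxonomy concrete, here is a minimal Python sketch of how a compliance team might tag systems by risk tier. The example use cases and the mapping are illustrative assumptions only; real classification requires legal analysis against the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act's four risk levels."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # conformity assessment required
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

# Hypothetical mapping of use cases to tiers; actual classification
# must be checked against the Act's prohibitions and high-risk annex.
EXAMPLE_CLASSIFICATIONS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def requires_conformity_assessment(use_case: str) -> bool:
    """A system tagged HIGH triggers the Act's conformity process."""
    return EXAMPLE_CLASSIFICATIONS.get(use_case) is RiskTier.HIGH
```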

Real-World Examples in Financial Services

Example A: Automation of Internal Processes Using AI

A bank uses AI to automate internal processes such as quality assurance, task automation, compliance checks and report generation. These systems are considered limited-risk but must still comply with transparency requirements. The bank ensures employees are informed about the AI’s role in these processes and maintains rigorous data protection standards. Regular audits are conducted to verify the system’s accuracy and reliability, ensuring compliance with the AI Act and building trust in the system’s outputs.
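As a rough illustration of what the transparency obligation can look like in practice, the snippet below attaches a disclosure notice to AI-generated output before it reaches an employee. The wording and function name are our own assumptions, not text from the Act.

```python
AI_DISCLOSURE = (
    "Notice: this report was generated with the assistance of an AI system. "
    "Please review before relying on its contents."
)

def label_ai_output(report_text: str) -> str:
    """Attach a human-readable AI disclosure to generated content."""
    return f"{AI_DISCLOSURE}\n\n{report_text}"

print(label_ai_output("Q3 compliance summary: all checks passed."))
```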

Example B: AI-Powered Fraud Detection

A bank implements an AI system to detect fraudulent transactions in real time. The system must comply with the high-risk requirements of the EU AI Act, so the bank conducts a thorough risk assessment and ensures the system is transparent and explainable. It provides detailed documentation on how the AI system works and trains staff to understand and oversee the AI’s decisions. The bank also implements regular audits and continuous monitoring to detect and mitigate biases or inaccuracies.
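Continuous bias monitoring can start as simply as comparing fraud-flag rates across customer segments and alerting when they diverge. The sketch below assumes a toy decision log and an illustrative 10-percentage-point gap threshold; a production system would use proper fairness metrics.

```python
from collections import defaultdict

def flag_rate_by_segment(decisions):
    """decisions: iterable of (segment, was_flagged) pairs.
    Returns the share of transactions flagged as fraud per segment."""
    totals, flags = defaultdict(int), defaultdict(int)
    for segment, was_flagged in decisions:
        totals[segment] += 1
        flags[segment] += int(was_flagged)
    return {s: flags[s] / totals[s] for s in totals}

def disparity_alert(rates, max_gap=0.10):
    """Flag for review when the gap between the most- and least-flagged
    segments exceeds max_gap (an illustrative threshold, not a legal rule)."""
    return max(rates.values()) - min(rates.values()) > max_gap

log = [("segment_a", True), ("segment_a", False),
       ("segment_b", False), ("segment_b", False)]
rates = flag_rate_by_segment(log)
print(rates)                   # {'segment_a': 0.5, 'segment_b': 0.0}
print(disparity_alert(rates))  # True -> investigate for bias
```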

Example C: Automated Loan Approval

A financial institution develops an AI system to automate elements of loan approvals. The system is also classified as high-risk due to its potential impact on individuals’ financial lives. To comply, the institution conducts a conformity assessment, ensuring the AI system adheres to technical standards set by European Standardisation Organisations (ESOs). They implement robust data governance practices, including data minimisation and transparency measures, to protect applicants’ personal information and ensure fair decision-making processes.
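One way to operationalise data minimisation is an explicit allow-list of model inputs, so that fields irrelevant to creditworthiness never reach the decision process. The field names below are hypothetical.

```python
# Fields the (hypothetical) credit model is allowed to see. An explicit
# allow-list is one way to enforce data minimisation at the code level.
ALLOWED_FEATURES = {"income", "existing_debt", "employment_years"}

def minimise(application: dict) -> dict:
    """Drop everything the model does not strictly need, so fields such
    as name or nationality never reach the decision process."""
    return {k: v for k, v in application.items() if k in ALLOWED_FEATURES}

raw = {"name": "…", "nationality": "…", "income": 52000,
       "existing_debt": 8000, "employment_years": 6}
print(minimise(raw))  # {'income': 52000, 'existing_debt': 8000, 'employment_years': 6}
```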

General-Purpose AI (GPAI)

The Act specifically addresses general-purpose AI (GPAI): versatile AI models capable of performing a wide variety of tasks. GPAI models deemed to pose systemic risk, due to their potential impact, are subject to additional requirements including rigorous risk assessments, cybersecurity measures, and incident reporting.

Territorial and Extraterritorial Application

One of the unique aspects of the EU AI Act is its territorial and extraterritorial application. The Act applies not only to AI systems used within the EU but also to those developed outside the EU if their outputs are utilised within the Union. This ensures that non-EU entities cannot bypass regulations by merely outsourcing AI tasks to regions with less stringent laws.

Copyright Compliance

A significant provision of the Act requires GPAI providers to comply with EU copyright law, including publishing sufficiently detailed summaries of the content used for training. This aims to extend EU copyright standards globally, ensuring fair competition and the protection of intellectual property.

Interplay with GDPR

The EU AI Act intersects with the General Data Protection Regulation (GDPR), particularly concerning the processing of personal data. While the GDPR focuses on data privacy, the AI Act addresses the ethical and safe use of AI systems. Businesses must ensure compliance with both regulations, especially when dealing with high-risk AI systems that process personal data.

Penalties for Non-Compliance

The EU AI Act imposes strict penalties for non-compliance. The highest fines, up to €35 million or 7% of worldwide annual turnover (whichever is higher), are reserved for violations involving prohibited AI practices. Other breaches, including failing to meet transparency requirements or GPAI obligations, can result in fines of up to €15 million or 3% of global turnover, again whichever is higher.
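The penalty structure lends itself to simple arithmetic: the applicable cap is the fixed amount or the turnover percentage, whichever is higher. A minimal sketch:

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Upper bound of the fine: the fixed cap or the turnover percentage,
    whichever is higher, per the Act's penalty tiers."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_breach": (15_000_000, 0.03),
    }
    fixed_cap, pct = tiers[violation]
    return max(fixed_cap, pct * turnover_eur)

# A firm with €1bn global turnover: 7% (€70m) exceeds the €35m floor.
print(max_fine(1_000_000_000, "prohibited_practice"))  # 70000000.0
```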

Preparing for Compliance

Businesses can take several steps to prepare for the AI Act:

  1. Appoint an External Auditor: Seek expert advice and obtain an independent assessment of compliance.
  2. Identify AI Systems: Create an inventory of existing and planned AI systems (a sketch of one inventory record follows this list).
  3. Assess Applicability: Determine which AI systems are subject to the Act and their respective risk categories.
  4. Conduct Gap Analysis: Evaluate current practices against the Act’s requirements, focusing on risk management, data governance, and legal compliance.
  5. Develop a Compliance Roadmap: Plan and implement measures to address identified gaps.
  6. Establish AI Governance: Designate responsible personnel, such as an AI officer, to oversee compliance efforts.
  7. Implement Regular Audits: Ensure continuous monitoring and assessment of AI systems for compliance.
  8. Train Staff: Educate employees on the AI Act and its implications to ensure informed and compliant AI use and development.
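For step 2, an inventory can begin as little more than a structured record per system. The sketch below (assuming Python 3.10+) is one possible shape; the field names are illustrative, not mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row of a hypothetical AI system inventory (step 2 above)."""
    name: str
    purpose: str
    risk_tier: str                 # e.g. "high", "limited", "minimal"
    owner: str                     # accountable person or team
    in_scope_of_ai_act: bool
    last_audit: date | None = None
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="loan-approval-model",
        purpose="automate elements of loan approvals",
        risk_tier="high",
        owner="credit-risk team",
        in_scope_of_ai_act=True,
        mitigations=["data minimisation", "human review of rejections"],
    ),
]
print(inventory[0].risk_tier)  # high
```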

Compliance Strategies for the EU AI Act

Ensuring compliance with the EU AI Act requires a structured approach encompassing risk management and data governance.

Risk Management

Effective risk management is essential for identifying and mitigating potential hazards associated with AI systems. Techniques include:

  • Risk Assessment Frameworks: Adopt a recognised framework, such as ISO 31000 or the NIST AI Risk Management Framework, to identify, score, and prioritise risks (a simple scoring sketch follows this list).
  • Continuous Monitoring: Establish processes to detect and address risks in real-time, ensuring regular audits and updates to risk management protocols.
  • Stakeholder Engagement: Involve stakeholders in the risk management process to incorporate their feedback and concerns.
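A common starting point for risk assessment is a likelihood-by-impact matrix. The banding below is an illustrative assumption, not a threshold taken from the Act:

```python
def risk_score(likelihood: int, impact: int) -> str:
    """Classic likelihood x impact scoring on 1-5 scales. The band
    boundaries are illustrative, not prescribed by the AI Act."""
    score = likelihood * impact   # 1 (rare/negligible) .. 25 (certain/severe)
    if score >= 15:
        return "high - mitigate before deployment"
    if score >= 8:
        return "medium - mitigate and monitor"
    return "low - monitor"

print(risk_score(likelihood=4, impact=4))  # high - mitigate before deployment
```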

Data Governance

Robust data governance ensures data quality, transparency, and compliance with both the AI Act and GDPR. Best practices include:

  • Data Quality Management: Ensure data is accurate, complete, and reliable through documented validation, profiling, and quality checks.
  • Transparency and Accountability: Maintain transparency in data processing activities, documenting data sources, methods, and usage purposes.
  • Data Protection Measures: Implement robust safeguards such as encryption and anonymisation to protect personal data (see the sketch following this list).
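As a minimal example of encryption at rest, the sketch below uses the third-party `cryptography` package to encrypt a single personal data field; key management is deliberately simplified for illustration.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, store this in a key vault
fernet = Fernet(key)

# Encrypt a personal data field at rest; only holders of the key can read it.
token = fernet.encrypt(b"applicant-email@example.com")
print(fernet.decrypt(token))     # b'applicant-email@example.com'
```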

By integrating these strategies, businesses can ensure their AI systems are safe, ethical, and compliant, fostering trust and innovation in the AI landscape.

Key Dates and Deadlines for the AI Act

Understanding the timeline of the EU AI Act is crucial for businesses to plan their compliance efforts effectively. Here are the important dates and deadlines associated with the Act:

  • July 2024: The AI Act was published in the Official Journal of the EU on 12 July 2024 and entered into force 20 days later, on 1 August 2024.
  • February 2025: Prohibitions on unacceptable-risk AI systems apply from 2 February 2025. This covers AI for subliminal manipulation, social scoring, and (with narrow exceptions) real-time remote biometric identification.
  • August 2025: Obligations for general-purpose AI models apply from 2 August 2025.
  • August 2026: The main body of the Act applies from 2 August 2026, so all in-scope businesses need to be compliant by then. This includes obligations for high-risk AI systems in areas such as biometrics, critical infrastructure, and law enforcement. Member states must also have at least one operational AI regulatory sandbox in place by this date.

Conclusion

The EU AI Act represents a significant step towards regulating AI, ensuring it is used ethically and safely. Businesses operating within or with the EU market must proactively prepare for these regulations to avoid substantial fines and ensure their AI systems are trustworthy and compliant. By understanding the Act’s requirements and implementing robust governance and compliance measures, organisations can navigate the complexities of the AI landscape and leverage AI’s potential responsibly.
