2.3.1. The EU AI Act: A Risk-Based Regulation with Global Reach

The European Union’s AI Act, passed in 2024, is the world’s first legally binding framework for the governance of artificial intelligence. It stands out not only for its scope but for its core design: a shift away from voluntary ethics and toward infrastructure-based accountability.

The EU AI Act transforms ethical goals into legal consequences, making accountability enforceable. Unlike ethics boards or voluntary guidelines, it is enforced through:

  • Cross-border regulatory coordination

  • Audit-ready documentation and traceability

  • Alignment with GDPR and broader digital rights legislation (e.g., Digital Services Act)

Unlike other governance models that rely on informal pledges or agency discretion, the EU AI Act embeds mandatory obligations directly into the AI lifecycle, from system design to post-deployment monitoring. Central to the Act is its risk-based approach to regulation: instead of regulating all AI systems equally, it assigns legal responsibilities in proportion to societal risk.
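
To make the tiering concrete, here is a minimal, hypothetical sketch in Python of how obligations scale with risk. The tier names follow the Act's published categories, but the `USE_CASE_TIERS` mapping and the `obligations_for` helper are illustrative assumptions only; actual classification follows the Act's own annexes, not a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers in the EU AI Act's graduated approach (simplified)."""
    UNACCEPTABLE = "prohibited outright (e.g., social scoring by public authorities)"
    HIGH = "permitted, subject to the strict obligations in Table 7"
    LIMITED = "transparency duties only (e.g., disclosing chatbots and deepfakes)"
    MINIMAL = "no new obligations beyond existing law"

# Hypothetical lookup from intended use to tier; real classification follows
# the Act's annexes of listed use cases, not a simple dictionary.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "biometric_identification": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return a one-line summary of the duties attached to a use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations_for("cv_screening"))
print(obligations_for("spam_filter"))
```

The design point is that a spam filter and a CV-screening model face very different legal duties, even if both are built with the same underlying techniques.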

Implementation Tools

The Act's most demanding requirements apply to high-risk AI systems (see 2.1.3), such as those used in biometric surveillance, hiring, education, or financial services. For these applications, the Act imposes strict obligations across the full AI lifecycle, embedding governance not only post-deployment but also into system design and development. The key obligations are tabulated in Table 7.

Table 7: EU AI Act Obligations for High-Risk AI Systems

Obligation Area | EU AI Act Article(s) | Description
Risk Management System | Article 9 | Ongoing system to identify, assess, and mitigate risks throughout the lifecycle.
Data Governance and Quality | Article 10 | Use high-quality, representative, and unbiased datasets for training, validation, and testing.
Technical Documentation | Article 11 | Maintain comprehensive technical files covering system design, data sources, and controls.
Record-Keeping and Logging | Article 12 | Enable automatic event logging and retention for traceability and incident analysis.
Transparency to Users | Article 13 | Inform users about the system's purpose, functioning, limitations, and usage instructions.
Human Oversight Measures | Article 14 | Design systems for meaningful human oversight and intervention in decision-making.
Accuracy, Robustness, Cybersecurity | Article 15 | Ensure accuracy, resilience to faults, and protection against security threats.
Conformity Assessment (3rd-party) | Articles 19–20 | Undergo certification by a Notified Body to verify compliance before deployment.
Post-Market Monitoring | Articles 61–63 | Actively monitor systems after launch and report serious incidents to regulators.
Penalties for Non-Compliance | Article 71 | Non-compliance can result in fines up to €35 million or 7% of global annual turnover.

These obligations are embedded into the design, development, and deployment phases, not imposed merely as audits after release.
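
To illustrate what "audit-ready by design" can look like in practice, the sketch below shows one way a provider might implement Article 12-style automatic event logging in Python. It is a minimal sketch under stated assumptions: the log path, field names, and `log_decision_event` helper are hypothetical, and a production system would add retention schedules, access controls, and integrity protections.

```python
import hashlib
import json
import time
from pathlib import Path
from typing import Optional

AUDIT_LOG = Path("audit_log.jsonl")  # hypothetical append-only event log

def log_decision_event(model_id: str, model_version: str,
                       input_payload: dict, output: dict,
                       overseer: Optional[str] = None) -> dict:
    """Record one model decision as a timestamped, traceable event."""
    event = {
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,
        "model_version": model_version,
        # Hash inputs rather than storing raw personal data (GDPR minimisation).
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_overseer": overseer,  # supports Article 14-style oversight review
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Example: logging a single automated screening decision.
log_decision_event(
    model_id="cv-screening",
    model_version="1.4.2",
    input_payload={"applicant_id": "A-1029", "features": [0.2, 0.7]},
    output={"decision": "shortlist", "score": 0.83},
    overseer="reviewer_42",
)
```

Each appended record ties a decision to a model version and an input fingerprint, giving regulators and internal auditors the traceability needed for incident analysis, while hashing inputs avoids retaining raw personal data in the log itself.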

Governance Impact

The EU AI Act shifts AI governance from ethical aspiration to legal enforcement. It moves beyond high-level principles and embeds accountability mechanisms across the entire AI lifecycle. Firms operating in the EU must now ensure their systems are transparent, documented, and auditable by default, changing how companies structure their AI compliance infrastructure.

Criticisms and Trade-offs

  • Smaller developers may struggle with costly compliance, especially in borderline risk categories.

  • Enforcement complexity could lead to delays in approvals or innovation slowdowns.

  • Some experts worry that static classifications may not adapt to emerging hybrid AI models.

Global Relevance

Like GDPR, the EU AI Act exerts regulatory gravity: multinational companies align with EU standards even outside Europe to preserve market access. Countries such as Brazil, Canada, South Korea, and blocs like the African Union have cited the Act in shaping their own governance frameworks. It may soon define minimum global expectations for AI accountability.