2.1.1. Understanding AI Governance: The Foundation of Power and Accountability

If AI systems are shaping access to justice, employment, finance, and public services, then governance must do more than manage risk: it must define who gets to shape these systems, who benefits, and who is held responsible.

Rather than treating AI governance as a checklist of policies or procedural safeguards, we must recognize it as a structure of power: a mechanism that determines how authority, accountability, and values are distributed across technical, organizational, and institutional levels [1, 2]. Effective governance defines not just how AI is built, but by whom, for whom, and with what oversight. Developers, regulators, civil society, and the broader public all play a role in shaping these systems and in challenging the concentration of power they often reflect.

Yet AI systems present unique governance challenges. Unlike conventional software, they often operate autonomously in high-stakes environments such as predictive policing, healthcare diagnostics, and financial decision-making. Their processes are frequently opaque, difficult to explain, and prone to reinforcing historical bias. Many are adaptive, capable of evolving over time, which makes traditional forms of auditing and oversight insufficient. These characteristics demand a new approach to risk management: one that integrates transparency, accountability, and lifecycle-wide responsibility.

To meet this challenge, governance must go beyond technical safety. It must account for social, legal, and ethical implications across the entire AI lifecycle. Recognizing this need, several international standards have been developed to help organizations operationalize trustworthy AI governance through structured processes.

Among the most widely referenced are ISO/IEC 23894 [3], ISO/IEC 38507 [4], and ISO/IEC 42001 [5]. These standards address complementary aspects of governance: from managing technical risk, to assigning board-level oversight, to embedding internal accountability mechanisms (summarized in Table 4).

Table 4: Key ISO Standards Supporting AI Governance

| Standard | Scope | Primary Focus | Organizational Role |
| --- | --- | --- | --- |
| ISO/IEC 23894 | AI Risk Management | Identifying, assessing, and mitigating technical and societal risks across the AI lifecycle. | Guides teams in monitoring risks related to safety, fairness, and privacy, and in updating mitigation strategies as systems evolve. |
| ISO/IEC 38507 | AI Governance at Leadership Level | Ensuring AI aligns with business ethics, legal compliance, and public responsibilities. | Provides boards and executives with frameworks for decision-making authority, data accountability, and ethical oversight. |
| ISO/IEC 42001 | AI Management System (AIMS) | Embedding governance mechanisms into internal AI development, monitoring, and improvement. | Establishes lifecycle-wide structures for role assignment, internal audits, transparency, and traceable accountability. |
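
To make this concrete, here is a minimal sketch of the kind of lifecycle risk monitoring the ISO/IEC 23894 row describes. The standard prescribes a process, not code; the fairness metric, the tolerance, and all the numbers below are hypothetical assumptions.

```python
# Toy risk monitor in the spirit of ISO/IEC 23894: re-check a fairness
# indicator as the system evolves and flag when mitigation needs review.
# The metric choice, GAP_LIMIT, and all counts are hypothetical.

def demographic_parity_gap(approved_a: int, total_a: int,
                           approved_b: int, total_b: int) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(approved_a / total_a - approved_b / total_b)

GAP_LIMIT = 0.05  # illustrative tolerance, set by the designated risk owner

gap = demographic_parity_gap(approved_a=420, total_a=1000,
                             approved_b=350, total_b=1000)
if gap > GAP_LIMIT:
    print(f"Fairness gap {gap:.2f} exceeds {GAP_LIMIT}; trigger a risk re-assessment")
```
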
What Does AI Governance Really Do?

AI governance ensures that responsibility is not left to chance. It defines who is accountable at every stage of the AI lifecycle from design to deployment to post-market response.
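
The sketch below encodes that idea as a simple accountability register: no system moves forward while a lifecycle stage lacks a named owner. The stage names and role titles are hypothetical, not taken from any standard.

```python
from dataclasses import dataclass

# Hypothetical lifecycle stages; real stage lists vary by organization.
LIFECYCLE_STAGES = ["design", "data_collection", "training",
                    "validation", "deployment", "post_market_response"]

@dataclass
class Assignment:
    stage: str
    owner: str       # a named role, e.g. "Model Risk Lead"
    escalation: str  # where concerns about this stage are reported

def unassigned_stages(register: list[Assignment]) -> list[str]:
    """Return lifecycle stages that still lack an accountable owner."""
    covered = {a.stage for a in register}
    return [s for s in LIFECYCLE_STAGES if s not in covered]

register = [
    Assignment("design", "Product Ethics Lead", "AI Review Board"),
    Assignment("deployment", "MLOps Lead", "Chief AI Governance Officer"),
]
print(unassigned_stages(register))
# ['data_collection', 'training', 'validation', 'post_market_response']
```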

How Does ISO/IEC 38507 Assign Board-Level Responsibility?

Under ISO/IEC 38507, boards are expected to formally assign oversight of AI-related risks and ethics. This may involve appointing a Chief AI Governance Officer, setting up compliance reporting structures, and establishing review boards for high-impact systems. This makes accountability not optional but organizational.
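
One way to operationalize that assignment, sketched below, is to route each system to a review path according to its impact. ISO/IEC 38507 does not prescribe tiers or thresholds; the triage rule and review routes here are illustrative assumptions.

```python
# Illustrative triage only: ISO/IEC 38507 prescribes governance outcomes,
# not code. All routes and criteria below are hypothetical.
REVIEW_ROUTES = {
    "high":   "AI Review Board with quarterly reporting to the board",
    "medium": "Sign-off by the Chief AI Governance Officer",
    "low":    "Team-level risk checklist",
}

def impact_tier(affects_rights: bool, decides_autonomously: bool) -> str:
    """Toy rule: systems touching rights or deciding autonomously rank higher."""
    if affects_rights and decides_autonomously:
        return "high"
    if affects_rights or decides_autonomously:
        return "medium"
    return "low"

# Example: a hiring screener affects rights and decides autonomously.
print(REVIEW_ROUTES[impact_tier(affects_rights=True, decides_autonomously=True)])
```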

Governance Is Not Just Policy, It’s Power

Governance determines who shapes AI systems, sets their limits, and holds authority over their use. In practice, this power is often concentrated in:

  • Tech companies that develop and deploy AI models,

  • Governments that set policy and regulation,

  • Standards organizations (like ISO or IEEE) that define operational norms,

  • Research institutions and civil society that shape public discourse and ethical guidelines.

When governance is absent or weak, power flows invisibly through algorithms trained on biased data, through business models that prioritize profit over fairness, and through regulatory gaps that fail to protect vulnerable populations.

Governance is the means by which we embed democratic values into technical infrastructure. Without it, AI becomes a black box that answers only to itself, or worse, to whoever controls the box.

Three Layers of AI Governance

Effective AI governance operates at three interconnected levels:

| Level | Key Focus |
| --- | --- |
| System-Level Governance | Technical controls, transparency, explainability, human oversight |
| Organizational Governance | Internal policies, compliance practices, risk assessments, audit systems |
| Institutional Governance | National laws, international standards, ethical principles, public trust |

Each layer reinforces the others. A well-regulated institution sets the legal standard. An accountable organization implements it. And a well-designed AI system reflects those norms in its behavior.
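
To make the system-level layer tangible, here is a minimal sketch of one common technical control: a human-in-the-loop gate that auto-approves only high-confidence decisions and escalates the rest to a reviewer. The threshold and labels are hypothetical design choices.

```python
# Minimal human-oversight gate (a system-level control). The 0.9 threshold
# and the decision labels are hypothetical, not taken from any standard.
def decide(confidence: float, threshold: float = 0.9) -> str:
    """Auto-approve only high-confidence cases; escalate the rest."""
    if confidence >= threshold:
        return "auto_approved"      # logged for organizational-level audit
    return "escalated_to_human"     # a human reviewer takes over

for confidence in (0.97, 0.62):
    print(confidence, "->", decide(confidence))
```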

To further understand how these governance layers function or fail, consider an example that illustrates a breakdown across all levels: Clearview AI, a U.S.-based startup, developed a facial recognition tool by scraping over 3 billion images from social media without user consent. The technology was sold to law enforcement agencies across the U.S., raising serious concerns about privacy, surveillance, and due process.

Governance Breakdown

  • System level: No explainability or user consent mechanisms; prone to racial bias.

  • Organizational level: Lack of internal accountability and ethical oversight.

  • Institutional level: At the time, U.S. law offered no federal regulation of facial recognition technologies.

Global backlash followed: several U.S. cities banned the technology, while international regulators, from Canada to the EU, fined the company or ordered it to delete unlawfully obtained data. The Clearview case makes one thing clear: AI governance is not just about preventing technical error; it is about safeguarding human rights and democratic control over powerful technologies. With this foundational structure in place, we now turn to a more critical lens: how AI systems themselves wield power, not just through code, but through influence over human decisions, rights, and opportunities.


ThinkBOX: Who Governs the AI You Use?

Think of a service or app you use that incorporates AI (e.g., recommendation systems, hiring platforms, surveillance tools).

- **Who governs that system?**  
- **Is there a way for you to contest a decision it makes?**  
- **If not, how can that be made possible?**

Bibliography


  1. Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Amodei, D. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv:2004.07213. 

  2. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. 

  3. ISO. (2023a). ISO/IEC 23894:2023 – Artificial intelligence – Guidance on risk management. International Organization for Standardization. 

  4. ISO. (2022). ISO/IEC 38507:2022 – Governance implications of the use of artificial intelligence by organizations. International Organization for Standardization. 

  5. ISO. (2023b). ISO/IEC 42001:2023 – Artificial intelligence – Management system. International Organization for Standardization.