
2.2.2. Power and Policy: Who Really Shapes AI Governance?

If laws shape AI, who shapes the laws?

In theory, regulations are meant to serve the public good. In practice, however, the rulemaking process is often dominated by those with the most economic leverage and institutional access. Industry lobbying, self-regulation frameworks, and elite policy circles frequently shape how AI systems are governed, sometimes more than democratic oversight or public input [1].

Governance doesn’t happen in a vacuum. It is shaped by power: who writes the rules, whose interests are prioritized, and whose voices are excluded. AI regulation is often influenced as much by industry interests and lobbying as by legislative intent; the key stakeholders and their strategies are summarized in Table 6.

Table 6: Key Stakeholders and Their Roles in Shaping AI Policy

| Stakeholder | Influence Strategy |
| --- | --- |
| Governments | National strategies reflect geopolitical goals, economic priorities, and legal traditions |
| Tech Corporations | Shape or delay policy through lobbying, self-regulation frameworks, and standard-setting |
| Industry Standards Bodies | Influence definitions of fairness, safety, and risk through ISO, IEEE, and consortia |
| Civil Society | Push for ethical oversight and public participation, often after harm occurs |

These influences are not always visible to the public, yet they shape everything from data access requirements to how algorithmic accountability is defined. These dynamics aren’t just theoretical; they play out in real decisions about who controls AI systems, who is protected, and who is exposed to harm.

As shown in Case Study 008: Clearview AI, a facial recognition company scraped billions of images from social media without consent and sold the tool to U.S. law enforcement. Despite strong civil society backlash, the absence of federal regulation allowed the company to operate unchecked, while jurisdictions such as the EU, Canada, and Australia acted swiftly to impose penalties and demand data deletion. The case illustrates that AI governance is not merely a matter of national policy; it is a question of who defines the rules and whose interests are prioritized. It also reveals the risks of governance structures dominated by commercial influence and lacking public transparency.

Governance must therefore evolve from a backroom negotiation into a transparent process shaped by democratic input, civil society advocacy, and equitable representation. Only then can AI governance reflect the interests of the many, not the few. Understanding who sets the rules is only half the story; equally important is whether those rules can be enforced. In what follows, we examine why governance must move beyond ethics to operational accountability.

Regulatory Capture in AI

When industries shape the rules meant to regulate them, accountability is weakened. In AI, this happens through industry-funded ethics boards, lobbying for weaker laws, and direct influence over legislation.

Fact: Beginning in 2022, tech giants including OpenAI, Google/Alphabet, Microsoft, Amazon, and Meta lobbied to soften the EU AI Act’s “high-risk” definitions, reducing audit obligations.

Favorable Response:

  • General-purpose AI (GPAI) models such as GPT-4 were initially excluded from strict regulation under the “high-risk” category.

  • The final version of the EU AI Act (provisionally agreed in December 2023) distinguishes between “high-risk” AI systems and GPAI, allowing more flexibility for foundation models.

  • France, Germany, and Italy pushed for voluntary codes of conduct rather than mandatory rules for foundation models, and the EU partly conceded to this structure.

Strictness: In response to criticism from civil society and the European Parliament, the EU introduced a tiered approach for GPAI models:

  • Standard GPAI models: subject to light transparency rules (e.g., documentation, copyright declarations).

  • “Systemic risk” GPAI models (e.g., ChatGPT, Gemini): required to undergo more stringent risk assessments, mitigation measures, and incident reporting obligations.

Key Risk: If AI is governed by its creators, trust and fairness become optional.

Bibliography


  1. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.