2.1.4. Who Should Govern AI? Institutions, Corporations, or the Public?
“When technology makes the decisions, the question becomes: Who gets to decide what the technology does?”
As AI systems increasingly shape decisions once made by humans, such as who is hired, monitored, loaned money, or receives care, the stakes for governance have never been higher. The essential question is no longer whether AI should be governed, but by whom.
Should it be governments, whose legitimacy stems from public mandate but who often lag behind fast-paced innovation?
Should it be corporations, who lead AI development but also profit from its deployment?
Should it be the public, who are most impacted by AI, yet remain structurally excluded from shaping its trajectory?
In reality, AI governance today is driven by a mosaic of stakeholders, but not in equal or democratic ways. To assess the current balance and its limitations, we examine each stakeholder’s strengths, weaknesses, and sphere of influence.
Governments:
Governments play a crucial role in setting legal standards, protecting rights, and ensuring public safety. National AI strategies, public consultation processes, and emerging laws (like the EU AI Act and South Korea’s AI Basic Act) reflect growing recognition that AI cannot be left unregulated.
Strengths:
- Can ensure fairness and public accountability.
- Able to enforce legal consequences for misuse or harm.
- Can align AI with democratic values and fundamental rights.
Challenges:
- Move slowly compared to the pace of AI development.
- Often lack technical expertise.
- Vulnerable to lobbying and political pressure from industry.
Corporations:
Private companies, especially Big Tech firms, have historically led AI development. They possess the data, talent, and infrastructure to scale systems globally. Many have created internal ethics teams, AI principles, and advisory boards to self-regulate their technologies.
Strengths:
- Fast-moving and resource-rich.
- Capable of developing cutting-edge solutions and safety features.
- Influential in global standard-setting (e.g., through ISO, IEEE).
Challenges:
- Often prioritize profit over public interest.
- Lack transparency or external accountability.
- Ethics boards may serve as PR rather than enforceable oversight.
The Public:
While the public is most affected by AI systems, their involvement in governance is often limited to reacting after harm has occurred. There is growing momentum for more participatory AI governance, through mechanisms such as citizen assemblies, algorithmic impact audits, and rights to explanation.
- Participatory AI governance: a democratic approach to AI oversight that includes public input through tools like citizen assemblies, impact audits, or rights to explanation, ensuring that those affected by AI have a voice in how it is developed and used.
- Algorithmic impact audit: a structured evaluation that assesses the potential effects of an AI system on individuals and communities, focusing on equity, discrimination, and transparency before or during deployment (a minimal illustrative example of one such check appears at the end of this subsection).
Strengths:
- Brings lived experience and community insight into governance design.
- Reinforces democratic legitimacy.
- Creates pressure for transparency and ethical alignment.
Challenges:
- Limited access to technical systems or design processes.
- No unified voice; interests and values may conflict.
- Difficult to implement meaningful participation at scale.
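To make the idea of an algorithmic impact audit more concrete, the short Python sketch below shows one kind of check such an audit might include: comparing favourable-outcome rates across demographic groups and flagging large gaps. The data, group labels, and 80% threshold are illustrative assumptions for this example only, not a methodology prescribed by any of the frameworks discussed in this chapter.

```python
# Illustrative sketch of one check an algorithmic impact audit might include:
# compare favourable-outcome ("selection") rates across demographic groups.
# The data, group labels, and 80% threshold are assumptions for this example,
# not an official or prescribed audit methodology.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, where outcome is 1 (favourable) or 0."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest group's rate
    (a common rule-of-thumb screen, sometimes called the four-fifths rule)."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Hypothetical hiring decisions labelled by applicant group.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
print(rates)                          # approximately {'A': 0.67, 'B': 0.25}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```

A real audit would go well beyond a single metric, examining data provenance, error rates across groups, deployment context, and avenues for redress, which is precisely why public participation and external oversight matter.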
These briefings suggest that no single stakeholder can, or should, govern AI alone. The answer to “Who should govern AI?” is not found in choosing one sector, but in designing accountable systems in which public institutions, private developers, and affected communities each have a defined role. Shared governance is not only emerging as the global norm; it is becoming a democratic imperative.
As AI systems increasingly shape the fabric of society, the question of “who governs” is not merely academic; it is existential. Efforts to establish external AI ethics bodies have often failed to gain legitimacy or public trust. One example is Google’s short-lived ATEAC board, which was disbanded before it ever convened (see Case Study 009 and Section 2.2.3). So, rather than asking who alone should govern AI, the more productive question is how governance responsibilities can be distributed across sectors. Shared governance frameworks reflecting this integrated approach are already emerging:
- South Korea’s AI Basic Act, which involves both public institutions and private developers.
- Singapore’s AI Verify, which offers tools for corporate self-assessment guided by public-sector benchmarks.
- The EU AI Act, which mandates external audits and documentation for high-risk systems.
While Section 2.1 introduced the conceptual foundations of AI governance, including why it matters and who holds power, governance is not only about structure. It is also about execution. In the next section, we examine how well principles have translated into action, and what happens when responsibility is diffuse and enforcement is absent.