
The High-Stakes Contest Over AI Power

Who governs the machines that govern us?

Artificial intelligence is no longer just a tool; it is a governance system. It processes data, yes, but it also enforces classifications, allocates resources, and shapes real-world consequences. When AI systems control access to jobs, loans, healthcare, or freedom, who governs these decisions?

Imagine an algorithm that flags someone as high-risk for fraud based solely on their zip code. Or a hiring system that filters out women because historical data said they weren't “good fits.” Or a predictive policing model that over-targets neighborhoods of color. None of these systems are fictional. They’ve been deployed in real life. And in each case, harm occurred, but no single person, institution, or process was clearly responsible. These aren’t just technical failures. They are governance failures. And they force us to ask urgent questions:

  • Who decides what AI is allowed to decide?

  • What happens when harm is distributed, but accountability is missing?

  • Can governance keep up with algorithmic power?

“The choice is not between good AI and bad AI. It is between AI that serves the people and AI that serves power.” — Shoshana Zuboff

In January 2021, an AI chatbot named Lee Luda, built around the persona of a 20-year-old woman, took South Korea by storm (Case Study 008). Within just two weeks of launch, the chatbot had engaged over 750,000 users on Facebook Messenger. Marketed as a friendly conversational AI trained on real chat data, Luda was designed to reflect how young people communicate in everyday life. But it soon became evident that something had gone wrong. Luda began spewing hate speech, making discriminatory remarks about minorities, and expressing deeply offensive opinions, as shown in Figure 14. Public outrage followed. The service was suspended. Investigations revealed that the model had been trained on real user conversations without proper consent or (1) ethical oversight.

  1. Ethical oversight involves formal review processes that ensure AI systems align with ethical principles such as fairness, accountability, and respect for human rights.

What happened wasn’t a failure of code. It was a failure of governance.


Figure 14: Screenshot of Lee Luda Chatbot's chat page, where Lee Luda's failure reveals how unfiltered data and absent oversight can turn conversational AI into a source of public harm.

Case Study 008: The Lee Luda Chatbot Incident (Location: South Korea | Theme: Data Ethics and Consent Oversight)

📌 Overview:
In early 2021, the South Korean company Scatter Lab launched an AI chatbot named “Lee Luda” on Facebook Messenger. The chatbot was designed to simulate casual conversations typical of South Korean youth and quickly gained over 750,000 users within two weeks. It was trained using anonymized real-life chat data extracted from a mobile app.

🚧 Challenges:
The chatbot produced offensive and discriminatory remarks. The training dataset had been sourced from user conversations without proper consent or ethical review.

🎯 Impact:
Public criticism led to widespread discussion on AI training data ethics. Scatter Lab suspended the service amid media scrutiny.

🛠️ Action:
The chatbot was taken offline. The incident prompted evaluation of data consent practices and ethical standards in Korean AI development.

📈 Results:
The case shaped domestic AI ethics discussions and drew attention to the need for stricter data governance protocols.

The Luda case exposed how a lack of institutional control, ethical accountability, and transparency in AI development can lead to significant societal harm. It raised uncomfortable questions:

  • Who authorized the release of this model?

  • What safeguards were in place?

  • And ultimately, who was responsible?

This is the emerging reality of modern AI: a technology that doesn’t just support decision-making; it influences behavior, allocates resources, enforces norms, and shapes how we perceive truth. The moment AI starts acting with real-world consequences, deciding who gets hired, surveilled, loaned money, or even imprisoned, it no longer operates as a mere tool; it becomes a force that demands governance and accountability.

The Real Race: AI as a Site of Power and Control

AI systems today are developed and deployed by a relatively small group of stakeholders, primarily major tech corporations and, to a lesser extent, government agencies [1]. Global standards bodies, while not directly involved in development or deployment, influence governance by defining best practices and technical frameworks. This concentration of power means that decisions about how AI behaves, and whom it benefits or harms, are not made democratically. Instead, they’re often embedded in opaque models trained on biased data and shaped by institutional incentives.

The stakes are high. Left unchecked, AI can reinforce inequality, amplify misinformation, enable surveillance practices that infringe on privacy, and erode democratic processes [2,3]. But overregulation can also stifle innovation and limit the potential benefits of AI in healthcare, education, climate research, and more. The tension between freedom to innovate and the need to govern is at the heart of this chapter. In light of these tensions, it becomes essential to ask: What systems exist to ensure that AI technologies serve the public good rather than narrow interests?

AI Governance: From Technical Design to Societal Impact

Governance in AI encompasses the systems, institutions, and frameworks responsible for overseeing how artificial intelligence is developed, deployed, and monitored. It touches on four dimensions, illustrated in a brief sketch following the definitions below:

(1) Accountability: Is it clear who holds responsibility when AI systems cause harm or malfunction? 

  1. Accountability is the obligation of individuals or organizations to take responsibility for the outcomes of AI systems, including errors, harm, or bias.

(1) Transparency: Are the system’s purpose, data sources, limitations, and evaluation methods clearly disclosed? 

  1. Transparency in AI means disclosing how systems work, what data they use, and what limitations they have, to ensure they can be evaluated, understood, and questioned by humans.

(1) Human oversight: Are there mechanisms for meaningful human intervention before, during, or after AI operation?

  1. Human oversight is the ability for humans to intervene in or supervise AI system behavior, especially in high-risk or consequential settings.

(1) Regulatory compliance: Is the system demonstrably aligned with legal and safety requirements defined by applicable law or regulation? 

  1. Regulatory compliance refers to the degree to which an AI system conforms to laws, regulations, and policies that govern its development and use, including safety, privacy, and anti-discrimination standards.
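To make these four dimensions concrete, the sketch below treats them as a simple pre-deployment checklist. It is a minimal, hypothetical illustration: the GovernanceAssessment class, its field names, and the example system are invented for this chapter and do not come from any standard, law, or library, and in practice each dimension would be assessed through formal review rather than a handful of boolean fields.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class GovernanceAssessment:
    """Record of the four governance dimensions for a single AI system (illustrative only)."""
    system_name: str
    accountable_owner: Optional[str] = None    # accountability: who answers for harm or malfunction
    documentation_disclosed: bool = False      # transparency: purpose, data sources, limits published
    human_override_available: bool = False     # human oversight: people can intervene or halt the system
    applicable_regulations: List[str] = field(default_factory=list)  # compliance: laws and policies in scope

    def open_gaps(self) -> List[str]:
        """List the governance dimensions this system has not yet satisfied."""
        gaps = []
        if not self.accountable_owner:
            gaps.append("Accountability: no responsible party identified")
        if not self.documentation_disclosed:
            gaps.append("Transparency: purpose, data sources, and limitations undisclosed")
        if not self.human_override_available:
            gaps.append("Human oversight: no mechanism for human intervention")
        if not self.applicable_regulations:
            gaps.append("Regulatory compliance: no applicable laws or policies mapped")
        return gaps


# Example: a conversational chatbot released without any of the four safeguards,
# roughly the situation described in the Lee Luda case above.
chatbot = GovernanceAssessment(system_name="conversational-chatbot")
for gap in chatbot.open_gaps():
    print(gap)
```

Run against a system with none of the safeguards in place, the sketch prints one gap per dimension, mirroring the four questions above.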

But these are not just technical questions; they are questions of power. As AI becomes embedded in everything from justice systems to job applications, governance becomes the tool through which values are encoded into infrastructure.

To understand how governance can function as a structure of power and accountability, we begin with its foundation: what AI governance is, why it exists, and how it distributes responsibility across people, systems, and institutions.

Bibliography


  1. Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. 

  2. Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press. 

  3. Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.