
2.1.2. AI as Power: Beyond Code and Algorithms

When people think about artificial intelligence, they often picture a system that “thinks” or “learns”: a piece of software executing code, running models, and optimizing performance. However, AI does more than process data. It makes decisions, influences behavior, predicts outcomes, and enforces classifications, all of which affect people’s lives in visible and invisible ways.

AI systems don’t merely function as tools; they act as decision-makers that shape access, visibility, and power. AI is a mechanism of power: a force that can control, restrict, or enable actions at scale. And, as with any system of power, the questions of who designs it, who controls it, and who benefits from it are central to governance.

AI systems now make, or support, decisions once made by people [1]:

  • Predictive policing tools suggest who might commit a crime.

  • Hiring algorithms filter out candidates based on behavioral cues or résumé data.

  • Credit scoring models decide who gets loans, and who doesn’t.

  • Social media feeds shape what people see, believe, and fear.

These decisions are not neutral. They are shaped by training data, model design, and organizational intent. If a model is trained on biased data, it will likely produce biased outcomes. If optimization favors speed or profit, fairness may be compromised, often without intention but with real consequences.
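
The effects of biased data or skewed optimization are measurable. As a minimal sketch (using a hypothetical decision log, not data from any real system), a basic fairness audit can compare a model’s outcomes across groups and surface a demographic-parity gap:

```python
# Minimal fairness-audit sketch: compare approval rates across groups.
# The decision log below is hypothetical; a real audit would use logged
# outputs from the deployed model.
from collections import defaultdict

decisions = [
    # (group, model_decision) where 1 = approved, 0 = rejected
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approvals[group] += outcome

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())  # demographic-parity gap

print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")  # a large gap is a signal to investigate, not a verdict
```

A gap alone does not prove unfairness, but it is exactly the kind of signal a governance process can require teams to investigate before deployment.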

The shift from human to algorithmic decision-making (1) raises a fundamental governance question:

“When machines mediate power, how can human rights be protected?”

  1. Algorithmic decision-making: the process of making judgments or choices through automated systems, often without direct human involvement, using predefined rules, models, or machine learning.

These algorithmic systems increasingly influence life-altering decisions, yet their outputs are often accepted as objective or unbiased simply because they are generated by machines. This perception reflects a deeper problem: the widespread belief that AI systems are neutral tools rather than products of human values, assumptions, and priorities. This framing overlooks how:

  • Data reflects historical inequities (e.g., gender or racial bias in criminal justice data).

  • Models encode value judgments (e.g., what counts as a “good” applicant).

  • Deployment choices prioritize certain outcomes over others (e.g., speed over fairness).

These aren’t just engineering choices. They’re political and ethical decisions, often made without public input or democratic oversight. Importantly, they are not always the result of deliberate bias or intent. In many cases, such as Amazon’s résumé screening tool, developers did not set out to discriminate. But in the absence of fairness audits, diverse testing, or accountability structures, models learned from and replicated historical inequities.

AI Governance as a Catalyst for Sustainable Innovation

Trustworthy AI depends on frameworks that define who is responsible, what is acceptable, and how risks are managed, especially in high-impact applications. Governance is not the enemy of innovation; it is what makes innovation sustainable.

Without such frameworks, AI becomes a form of governance without consent: a quiet reshaping of the systems that define fairness, opportunity, and truth.

In 2023, researchers at Stanford and Princeton [2] revealed that GPT-4, when used to simulate standardized test help (such as SAT reading comprehension), produced systematically biased outputs across different demographic prompts. When asked to “respond as a Black student” or “as an Asian student,” the model returned answers of different quality, often less accurate or less developed for historically marginalized identities, even when the underlying question was identical.

Reasons for failure

  • The training data embedded historical inequities, which were not corrected through counterbalancing or fairness regularization.

  • The system lacked contextual fairness testing during evaluation, a core oversight given the social sensitivity of educational guidance (see the sketch after this list).

  • The responsibility for evaluating demographic impacts was diffuse, with no formal governance checkpoint or designated team overseeing ethical implications.
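
As a minimal sketch of what such contextual fairness testing might look like, the check below sends the same question under different demographic framings and compares a simple quality score against a neutral baseline. Both query_model and score_answer are hypothetical placeholders, not the evaluation actually used in the study:

```python
# Sketch of a contextual fairness check: same question, different demographic
# framings, compared against a neutral baseline. All functions are stand-ins.

PERSONAS = ["as a student", "as a Black student", "as an Asian student"]
QUESTION = "Explain the main argument of the reading passage."

def query_model(prompt: str) -> str:
    # Placeholder: in a real evaluation this would call the model under test.
    return f"Stubbed answer for: {prompt}"

def score_answer(answer: str) -> float:
    # Placeholder quality metric; a real evaluation might use rubric grading
    # or accuracy against reference answers.
    return len(answer.split()) / 100.0

scores = {}
for persona in PERSONAS:
    scores[persona] = score_answer(query_model(f"Respond {persona}. {QUESTION}"))

baseline = scores[PERSONAS[0]]  # the neutral framing
for persona, score in scores.items():
    # Flag framings whose answers score noticeably below the baseline.
    if baseline - score > 0.1:
        print(f"Potential disparity for '{persona}': {score:.2f} vs baseline {baseline:.2f}")
```

Even a crude check like this turns the question of demographic impact into a concrete evaluation step that a designated team can own and a governance checkpoint can require before results are trusted.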

What happened next? While this was a research simulation rather than a commercial deployment, the study was later de-emphasized and not continued publicly, and no institutional accountability followed. Nonetheless, it became a widely cited example of how advanced AI systems can silently perpetuate institutional bias, and of how the lack of governance structures can lead to ethical and operational failure.

As discussed in Section 2.1.1 on Governance Breakdown, this failure illustrates a collapse across system-level transparency, organizational oversight, and institutional safeguards. Had GPT-4’s development pipeline incorporated lifecycle governance mechanisms, the disparity could have been detected and addressed before public exposure. This case underscores the urgent need for embedded governance rather than reactive patching, particularly when AI systems shape outcomes for real people.
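
As an illustration of what an embedded, lifecycle-level checkpoint could look like, the sketch below gates a release on an audit record. The structure, field names, and thresholds are hypothetical, not a prescribed standard:

```python
# Hypothetical release gate: deployment proceeds only if audit checks pass.
from dataclasses import dataclass

@dataclass
class AuditResult:
    parity_gap: float              # e.g. from the fairness-audit sketch above
    ethics_review_completed: bool
    limitations_documented: bool

def release_gate(audit: AuditResult, max_gap: float = 0.1) -> bool:
    checks = {
        "disparity within threshold": audit.parity_gap <= max_gap,
        "ethics review completed": audit.ethics_review_completed,
        "limitations documented": audit.limitations_documented,
    }
    for name, passed in checks.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(checks.values())

# Example: a large parity gap or a missing review blocks the release.
if not release_gate(AuditResult(parity_gap=0.5,
                                ethics_review_completed=False,
                                limitations_documented=True)):
    print("Deployment blocked pending remediation.")
```

The point is not the specific checks but that they run at a defined point in the lifecycle, with a named owner, rather than being left to reactive patching after harm occurs.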

The challenge is not just technical; it is structural. As AI systems increasingly govern people without clear lines of accountability, the need for governance frameworks that protect public interests becomes urgent.

ThinkBOX: Who Governs the AI You Use?

Should AI systems be required to publish explanations of how they make decisions that affect people’s lives? If so:

  • Who should define what counts as a “sufficient” explanation?
  • Who gets access to it?

Bibliography


  1. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. https://fairmlbook.org 

  2. Bommasani, R., Li, L., Scharli, N., et al. (2023). Evaluating Bias in Large Language Models: Evidence from GPT-4. arXiv preprint. https://arxiv.org/abs/2304.01852