

**5.1.2 Human-in-the-Loop vs. Human-on-the-Sideline – Designing for Real Oversight**

AI developers like to claim their systems are “human-in-the-loop” (HITL). It sounds safe. It sounds ethical.

But let’s ask the harder question: what does that loop actually look like?

Too often, the human isn’t in the loop at all. The human is at the edge, watching the machine act, unable to see enough, interrupt fast enough, or understand deeply enough to stop disaster.

Case Study 017: 2018 Uber Self-Driving Car Fatal Crash (Location: USA | Theme: Oversight and Human-in-the-Loop Failure)

🧾 Overview
In March 2018, an Uber self-driving test vehicle struck and killed a pedestrian in Tempe, Arizona. The vehicle’s system misclassified the pedestrian several times and did not trigger emergency braking. The human safety driver was not alerted in time.

🚧 Challenges
The system failed to detect uncertainty, escalate risk, or hand control to the human driver. There were no meaningful alerts or override paths.

💥 Impact
The crash was the first recorded pedestrian death involving a self-driving car. Uber suspended testing, and public trust in autonomous vehicle safety was shaken.

🛠️ Action
Uber improved its perception and safety systems, added emergency braking in autonomous mode, and changed its test protocols.

🎯 Results
The case highlighted the dangers of weak human-in-the-loop design and the need for real-time oversight in autonomous systems.

The 2018 Uber self-driving car fatal crash³ in Arizona makes this painfully clear. The Level 3 vehicle encountered a pedestrian, Elaine Herzberg, crossing a dimly lit road at night. The vehicle’s perception system struggled: it first classified her as an unknown object, then as a vehicle, then as a bicycle, but never as a pedestrian. Each misclassification delayed the system’s risk escalation. The model didn’t surface its uncertainty. It didn’t ask for help. It didn’t slow down.

The decision logic compounded the failure. The system assumed that if something went wrong, the human safety driver would step in. But this assumption lived only on paper. The model provided no meaningful alert. No warning signal. No structured handover. The driver had no opportunity to act, no time to process what was happening, no authority in the moment that mattered most. The vehicle maintained its course, confident, unchallenged, and wrong.

Elaine Herzberg died. And with her died the illusion of the human-in-the-loop.

Who was in control? No one. Not the model. Not the machine. Not the human.

It wasn’t a failure of governance frameworks. It was a failure born in the model’s design:

  • A perception model that was built to label, but not to confess uncertainty, not to escalate confusion, not to engage the human when it faltered.
  • A decision logic that was built to act, but not to pause, not to yield authority, not to coordinate with human judgment when doubt arose.
  • An architecture that could pilot itself, but not partner with its human overseer, not share control, not create a bridge between automation and human command.

The machine did exactly what it was designed to do. And that design excluded the human from the critical path at the critical moment.

Systems claim many forms of human oversight: (1) Human-in-the-Loop (HITL), (2) Human-on-the-Loop (HOTL), and (3) Human-in-Command (HIC).

  1. The human directly intervenes in decisions (e.g., doctors approving AI diagnoses).
  2. The human monitors and steps in if needed (e.g., supervising autonomous vehicles).
  3. The human has full control, with AI as assistant (e.g., critical infrastructure).

But claiming these roles means nothing if the model doesn’t support them. The Uber system exemplified this: it assumed HOTL but built no pathways for the human to monitor, intervene, or command effectively.
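Whether these roles are real or merely claimed shows up in the decision path itself. Below is a minimal, illustrative Python sketch of how an oversight mode could be made an explicit, enforced part of the runtime rather than an assumption on paper. The enum values, callbacks (`alert_human`, `await_approval`, `act`), and routing logic are hypothetical, not taken from any particular system.

```python
from enum import Enum, auto

class OversightMode(Enum):
    HITL = auto()  # human directly approves each decision before it takes effect
    HOTL = auto()  # human monitors; the system must surface alerts and accept overrides
    HIC  = auto()  # human commands; the AI only proposes and never acts on its own

def route_decision(mode, proposal, confidence, alert_human, await_approval, act):
    """Route a model proposal according to the declared oversight mode.

    The point: a claimed mode must map to a concrete runtime path. If HITL or
    HIC is claimed, the system blocks on human input; if HOTL is claimed, an
    alert path must actually exist before the system proceeds.
    """
    if mode is OversightMode.HIC:
        alert_human(proposal, confidence)      # the human issues the actual command
        return None
    if mode is OversightMode.HITL:
        approved = await_approval(proposal)    # blocks until a human decides
        return act(approved) if approved else None
    # HOTL: the system may act, but only after the human has been alerted
    alert_human(proposal, confidence)
    return act(proposal)
```

Described in these terms, the Uber system claimed HOTL but implemented neither the alert path nor the override path.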

💬 “Oversight without authority is illusion. You haven’t built a safe system, just a scapegoat.”
– Adapted from accountability principles in ISO/IEC 23894

📌 Techniques that enable model-level human authority

The lesson from this failure is clear: if a model is designed only to act, but not to cooperate, only to decide, but not to defer, it leaves the human powerless when it matters most. Oversight is not something that can be added later. It must be built into the model itself. The following techniques show how model development can embed real human authority, turning oversight from a promise into a functional reality.

  • Uncertainty-aware architectures: Models that know when they are unsure, and can trigger early human intervention before harm occurs (see the sketch after this list).
  • Interruptible inference flows: Models designed so outputs can be paused, overridden, or redirected on command, not forced through to completion at all costs.
  • Causal model structures: Models that expose reasons and logic chains, not just black-box predictions, enabling humans to judge and challenge decisions.
  • Simulation-based edge-case handover validation: Development-stage testing to prove that handover scenarios actually allow humans to regain meaningful control in time.
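As a concrete illustration of the first two techniques, the sketch below shows a perception-and-control loop that escalates to the human when confidence drops or labels keep flipping, and that can be interrupted at any cycle. All thresholds and callbacks (`perceive`, `alert_driver`, `driver_has_taken_over`, `brake`) are illustrative assumptions, not drawn from any real autonomous-driving stack.

```python
import time

CONFIDENCE_FLOOR = 0.85   # illustrative: below this, the model must escalate
MAX_LABEL_FLIPS  = 2      # repeated reclassification is itself an uncertainty signal

def control_loop(frames, perceive, alert_driver, driver_has_taken_over, brake):
    """Uncertainty-aware, interruptible control loop (sketch).

    perceive(frame) -> (label, confidence); the other arguments are callbacks
    into the vehicle platform and the human interface.
    """
    labels = []
    for frame in frames:
        label, confidence = perceive(frame)
        labels.append(label)
        flips = sum(1 for a, b in zip(labels, labels[1:]) if a != b)

        # Uncertainty-aware escalation: low confidence OR unstable labels trigger
        # a structured handover instead of silently continuing on course.
        if confidence < CONFIDENCE_FLOOR or flips > MAX_LABEL_FLIPS:
            alert_driver(reason="uncertain perception", label=label, confidence=confidence)
            deadline = time.monotonic() + 2.0        # bounded takeover window
            while time.monotonic() < deadline:
                if driver_has_taken_over():
                    return "human_in_control"
                time.sleep(0.05)
            brake()                                  # safe fallback if nobody responds
            return "fallback_stop"

        # Interruptible flow: the human can pre-empt the system at any cycle.
        if driver_has_taken_over():
            return "human_in_control"
    return "completed"
```

The fourth technique, simulation-based handover validation, would then exercise exactly this kind of loop against staged edge cases to verify that the takeover window is actually long enough for a human to respond.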

💼 Legal Lens (EU & Korea) Under Article 14 of the EU AI Act¹, high-risk AI systems must be designed for effective human oversight throughout their lifecycle. South Korea’s AI Basic Act (2025)² similarly mandates transparency and reviewability for high-impact AI systems at all stages of deployment and use.

✅ Key Takeaway

A system with a human nearby is not the same as a system under human control.
Oversight is only real when humans can see what’s happening, understand why, and act before it’s too late.

In the next section, we turn from oversight design to interpretability, exploring how models that look explainable can still hide faulty logic beneath the surface.

Bibliography


  1. European Parliament and Council. (2024). EU Artificial Intelligence Act – Final Text. Article 14. https://eur-lex.europa.eu 

  2. Republic of Korea. (2025). AI Basic Act (Artificial Intelligence Framework Act). Ministry of Science and ICT. 

  3. National Transportation Safety Board. (2019). Preliminary report: Highway HWY18MH010. https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH010-preliminary.pdf