1.1 If AI Kills People, Who Is Responsible?
Artificial Intelligence (AI) is no longer merely a technical tool — it has become a decision-making agent embedded in systems that influence life, health, and safety. As these systems grow more autonomous, we must ask a critical question:
“When technology makes decisions that impact life and death, who do we hold accountable? Who bears responsibility?”
Consider these examples:
- Autonomous Vehicles: A self-driving car fails to recognize a pedestrian at night and causes a fatal accident. Should responsibility fall on the manufacturer, the algorithm developer, or the passenger?
- Medical Diagnosis AI: An AI model recommends the wrong cancer treatment based on flawed data inputs. Is the fault with the hospital, the AI vendor, or the data provider?
These scenarios expose a deeper problem: when AI acts on our behalf, responsibility becomes diffuse. While some argue that AI is merely a tool designed by humans, others contend that its autonomous behavior makes it akin to an independent agent, capable of making decisions beyond direct human command.
The implications go beyond technical accountability(1); they raise ethical, legal, and institutional questions. How do we build governance systems that define, distribute, and enforce responsibility across the AI lifecycle?
(1) Accountability: the state of being answerable for decisions and actions, including the ability to demonstrate and justify those decisions and actions (ISO/IEC 23894).
“Technology has no conscience. People do.” — Sherry Turkle, MIT sociologist and author of Reclaiming Conversation
AI systems may execute decisions, but they do not understand consequences. Behind every algorithm is a human — a designer, developer, or deployer — who made choices that shaped its behavior. When harm occurs, responsibility cannot be deflected onto machines. It must be traced back to the people and institutions that built, permitted, or failed to govern them.