
1.5.1. Case 1: Self-driving cars and liability

“The car hit and killed the man, but there was no one who was responsible.”

This unsettling statement summarizes public reaction to the fatal 2018 self-driving car accident involving Uber in Arizona (see Case Study 001). Although the vehicle’s sensors detected the pedestrian, the system failed to brake in time, and the human safety driver did not intervene. A life was lost; more critically, the incident exposed a legal vacuum: no clear framework existed to assign responsibility.

This case remains a defining example of why legal trustworthiness must be embedded into the design, regulation, and deployment of autonomous AI systems.

The Uber incident revealed gaps in both system behavior and legal infrastructure. Despite identifying the pedestrian, the AI system did not apply emergency braking. The vehicle was in autonomous mode, but a human safety driver was present—creating ambiguity over who was in control, and who was accountable.

The situation became even more complex when questions emerged:

  • Was the vehicle manufacturer at fault for failing to ensure system integrity?
  • Was the AI developer responsible for flawed object classification or response logic?
  • Should the vehicle owner or test operator bear liability for system oversight?

This diffusion of responsibility revealed what happens when *roles between human and machine are not clearly defined*, a condition that undermines both legal recourse and public trust.

The Global Response: Clarifying Accountability

In response to such incidents, several governments have begun crafting legal frameworks for autonomous systems. A key example is the European Union’s AI Act, which explicitly addresses risk classification, safety compliance, and post-deployment monitoring for high-risk systems like autonomous vehicles.

The Act mandates that:

  • Manufacturers define and document responsibility during development
  • Safety testing and audits be conducted throughout the lifecycle
  • Operators monitor for anomalies and maintain accountability after deployment

By requiring traceable responsibility, the EU AI Act aims to ensure that when failures occur, legal accountability is no longer ambiguous.
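To make "traceable responsibility" a little more concrete, the sketch below shows one hypothetical way an autonomous-vehicle stack could keep an append-only audit log of perception and decision events, so that a post-incident review can reconstruct which component observed what, what action was (or was not) taken, and which party had declared accountability for that component. The `DecisionRecord` fields, the `log_decision` helper, and the example values are illustrative assumptions for this case study, not requirements quoted from the EU AI Act or details of any real vehicle platform.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical illustration only: a minimal, append-only decision log that an
# autonomous-vehicle stack could emit to support post-incident accountability.

@dataclass
class DecisionRecord:
    timestamp: str          # UTC time of the perception/decision event
    component: str          # e.g. "perception", "planning", "safety_driver_monitor"
    observation: str        # what the component reported
    action_taken: str       # what the system actually did in response
    responsible_party: str  # role declared accountable for this component

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one traceable record as a line of JSON (a simple audit-trail sketch)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: perception reports a pedestrian, but no emergency braking is issued.
log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    component="perception",
    observation="pedestrian detected in lane, distance approx. 25 m",
    action_taken="no emergency braking issued",
    responsible_party="manufacturer: perception/planning integration",
))
```

An append-only, timestamped log of this kind is one plausible building block for the documentation, auditing, and post-deployment monitoring obligations described above, because it ties each observed failure to a named component and a declared responsible party.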

Why This Case Still Matters

Without clear legal standards, victims may not receive proper compensation, and companies may continue to deploy systems that evade liability. Public backlash and regulatory hesitation can delay innovation—not because people oppose technology, but because they fear its consequences without accountability.

As autonomous vehicles become more embedded in daily life, legal trustworthiness becomes a prerequisite for public acceptance. Trust cannot be demanded; it must be earned through transparency, traceability, and responsible design.

This case illustrates that without clarity in legal roles and responsibilities, even advanced AI systems cannot move safely from test environments to public roads.