
1.1.2. When a patient loses their life due to a misdiagnosis of medical AI, who is responsible?


Automation Bias in Healthcare AI

Even when AI doesn’t cause physical harm, it can silently reshape how humans make decisions. In healthcare, this takes the form of automation bias (1), a growing ethical risk in which clinicians begin to over-trust AI systems, sometimes at the cost of patient safety.

  1. Automation bias is a phenomenon in which people place too much trust in the judgment of automated systems and gradually shift the responsibility for their own judgments to the system.

In healthcare, AI is showing the potential to revolutionize diagnosis and treatment with efficiency and accuracy. By analyzing vast amounts of data and detecting patterns, AI helps spot signs of disease that humans might miss. Behind these technological advances, however, lie serious ethical risks that are often overlooked in practice. Among them, automation bias is one of the most serious: as depicted in Figure 2, medical professionals may gradually become overly dependent on AI systems.


Figure 2: Automation Bias in Healthcare AI
A symbolic representation of the growing dependence on AI by medical professionals, where human judgment is gradually overshadowed by machine-generated decisions.
[Illustrative AI-generated image]

Healthcare AI starts out as a simple assistive tool, but as it repeatedly delivers accurate results, medical staff may begin to place blind faith in it and skip the process of independent verification. As a result, even when the AI's judgment is wrong, the error may go undetected.

In fact, the 2016 systematic review "Automation bias and verification complexity: a systematic review" showed that as medical staff come to trust AI in their diagnoses, they tend to neglect the verification process. In particular, the finding that more than 80% of medical staff follow the AI's judgment almost unconditionally once its accuracy exceeds 95% shows how dangerous automation bias is.

The problem of automation bias can still arise even when healthcare AI is formally positioned as a secondary support tool. As medical professionals gain experience with the system’s generally accurate predictions, they may unconsciously begin transferring critical decision-making to the AI. This transition is shown symbolically in Figure 3.


Figure 3: The Shift of Responsibility in Healthcare AI
A visual representation of how repeated reliance on accurate AI results can gradually lead medical professionals to defer responsibility, potentially increasing risk when AI misjudges a case.
[Illustrative AI-generated image]

Minimizing the Risk

Reducing the risk of automation bias requires a multi-dimensional response. First, as an institutional safeguard, there must be a legal process in place to ensure that physicians review and, when needed, correct the output of medical AI. Rather than letting clinicians simply follow the results, institutions should establish a structured verification system, as sketched below.
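
As one illustration of such a gate, the minimal Python sketch below makes an AI suggestion unusable until a physician explicitly confirms or overrides it. All names (`AISuggestion`, `finalize`, the sample values) are hypothetical and not drawn from any specific clinical system.

```python
# Minimal sketch of a human-in-the-loop verification gate (hypothetical names).
# An AI suggestion is never finalized until a physician explicitly reviews it
# and either confirms or overrides the result.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AISuggestion:
    patient_id: str
    finding: str          # e.g. "pneumonia: positive"
    confidence: float     # model confidence in [0, 1]

@dataclass
class ReviewedResult:
    suggestion: AISuggestion
    reviewer: str         # physician who signed off
    accepted: bool        # True = confirmed, False = overridden
    final_finding: str
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def finalize(suggestion: AISuggestion, reviewer: str,
             override: Optional[str] = None) -> ReviewedResult:
    """Require an explicit physician decision before the result is recorded."""
    if override is not None:
        return ReviewedResult(suggestion, reviewer, accepted=False, final_finding=override)
    return ReviewedResult(suggestion, reviewer, accepted=True, final_finding=suggestion.finding)

# Usage: only a ReviewedResult, never the raw AI output, enters the record.
ai_out = AISuggestion(patient_id="P-001", finding="pneumonia: positive", confidence=0.92)
confirmed = finalize(ai_out, reviewer="Dr. Lee")
corrected = finalize(ai_out, reviewer="Dr. Lee", override="pneumonia: negative")
```

The point of the design is that the system offers no code path that writes the AI's finding directly; the physician's decision is a required step, not an optional check.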

Medical professionals must also clearly understand the limitations of AI. Ethical training should emphasize that AI supports, not replaces, human judgment. Healthcare AI is a complement to professional responsibility, not a substitute, and ethical guidelines should reinforce this.

To ensure AI in medicine is trustworthy, explainable AI (XAI) techniques should be adopted to clarify how decisions are made. These help clinicians understand what information influenced the AI’s output. For example, if an AI detects pneumonia, a heatmap can show which parts of the lung contributed to the result. This helps verify whether the AI focused on medically relevant features or misleading artifacts.
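
As a rough illustration of one such technique, the sketch below computes a Grad-CAM-style heatmap for a PyTorch image classifier. Here `resnet18` merely stands in for a trained chest X-ray model and the random input is a placeholder, so the resulting heatmap carries no clinical meaning; it only shows the mechanics of tracing a prediction back to image regions.

```python
# Minimal Grad-CAM sketch: which image regions drove the predicted class?
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)   # placeholder for a trained pneumonia classifier
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; its feature maps drive the heatmap.
target_layer = model.layer4
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a heatmap (H x W, values in [0, 1]) for the chosen class."""
    logits = model(image)                  # image: (1, 3, H, W)
    model.zero_grad()
    logits[0, class_idx].backward()        # gradients w.r.t. the chosen class
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)  # dummy input
```

In practice, a clinician would overlay such a heatmap on the original radiograph and ask whether the highlighted regions are anatomically plausible for the reported finding.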

However, explanations should not be accepted at face value. They are only one part of a larger decision-making process and must be interpreted critically by trained professionals. In addition to explainability, real-time monitoring systems are essential for ensuring fairness and stability in predictions. These systems can track anomalies and flag errors or biases in real time. For instance, if a model underrepresents women in heart disease datasets, it might assign lower risk scores than it should—monitoring systems can detect and correct such disparities.
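
A minimal monitoring sketch along these lines is shown below: it tracks the mean predicted risk per recorded group and raises an alert when the gap between groups exceeds a threshold. The class name, group labels, threshold, and toy scores are illustrative assumptions, not part of any specific monitoring product.

```python
# Minimal sketch of a subgroup disparity monitor for streaming predictions.
from collections import defaultdict

class DisparityMonitor:
    """Track mean predicted risk per group and flag large gaps for human review."""
    def __init__(self, max_gap: float = 0.10):
        self.max_gap = max_gap
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def update(self, group: str, risk_score: float) -> None:
        self.sums[group] += risk_score
        self.counts[group] += 1

    def check(self):
        means = {g: self.sums[g] / self.counts[g] for g in self.counts}
        if len(means) < 2:
            return None
        gap = max(means.values()) - min(means.values())
        if gap > self.max_gap:
            return f"ALERT: mean risk gap {gap:.2f} across groups {means}"
        return None

# Usage with toy scores: a persistent gap between groups triggers an alert.
monitor = DisparityMonitor(max_gap=0.10)
for group, score in [("female", 0.22), ("male", 0.41), ("female", 0.25), ("male", 0.44)]:
    monitor.update(group, score)
print(monitor.check())
```

A real deployment would compare against clinically validated baselines rather than a fixed gap, and an alert would prompt investigation and model review rather than an automatic correction.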

These safeguards don’t replace medical expertise—they support it. Without careful human oversight, even technically sound tools can introduce risk or unfairness into patient care.

Bibliography

  1. Goddard, K., Roudsari, A., & Wyatt, J. C. (2016). Automation bias and verification complexity: a systematic review. Journal of the American Medical Informatics Association, 24(2), 423–431.
  2. Marcus, G. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon.