1.2.2. Legal Trustworthiness

Legal trustworthiness is a prerequisite for AI systems to be socially adopted and relied upon by institutions. It means ensuring that AI systems not only perform well technically, but also comply with existing laws, uphold human rights, and operate within a framework that clearly assigns responsibility when things go wrong.

A central component of legal trustworthiness is the distribution of accountability. When AI makes decisions that affect people’s rights, safety, or access to resources, it must be possible to identify clearly who is responsible. The 2018 Uber self-driving car crash in Arizona is a key example: a test vehicle operating in autonomous mode struck and killed a pedestrian, yet responsibility was contested among the vehicle manufacturer, the AI developer, and the human safety driver, leaving liability unclear. The incident underscored the dangers of deploying high-risk AI without clearly defined liability standards.

As AI systems grow in influence, governments are strengthening regulations to prevent rights violations. The European Union’s AI Act, for example, requires that high-risk systems meet strict criteria in data handling, human oversight, transparency, and post-deployment monitoring. In one case, a European public institution faced sanctions under the General Data Protection Regulation (GDPR) for using AI-enabled surveillance cameras without adequate data protection safeguards. This illustrates how non-compliance can lead to reputational damage, regulatory fines, and the loss of public trust.

AI that lacks legal trustworthiness may perform well technically, yet still fail to gain social acceptance. Without clear legal boundaries and identifiable responsible actors, even powerful systems become risky to deploy. Legal trust is not only a developer's concern; it requires multi-stakeholder collaboration, policy alignment, and social dialogue.

To advance legal trustworthiness, countries and organizations must:

- Define accountability roles across the AI lifecycle (from design to post-deployment)
- Ensure compliance with laws like the GDPR and forthcoming AI laws (e.g., the EU AI Act)
- Integrate regulatory checkpoints into development and deployment workflows (a minimal sketch of such a checkpoint follows below)
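As an illustration of the last point, a regulatory checkpoint can be made executable: a gate in the deployment workflow that refuses to release a high-risk system until the required safeguards and an accountable owner are recorded. The Python sketch below is a hypothetical example, not a prescribed standard; the `ComplianceRecord` fields and the `deployment_gate` function are illustrative assumptions that loosely mirror the requirement areas of the EU AI Act mentioned earlier.

```python
from dataclasses import dataclass, field

# Hypothetical compliance record for a high-risk AI system. The fields loosely
# mirror the requirement areas named above (data handling, human oversight,
# transparency, post-deployment monitoring); they are illustrative, not a legal standard.
@dataclass
class ComplianceRecord:
    system_name: str
    accountable_owner: str             # named role responsible at this lifecycle stage
    data_protection_assessment: bool   # e.g., a completed data protection impact assessment
    human_oversight_plan: bool         # documented human-in-the-loop procedure
    transparency_notice: bool          # user-facing disclosure that AI is in use
    monitoring_plan: bool              # post-deployment incident monitoring in place
    missing: list = field(default_factory=list)

    def check(self) -> bool:
        """Return True only if every required safeguard is present; record gaps otherwise."""
        required = {
            "accountable_owner": bool(self.accountable_owner),
            "data_protection_assessment": self.data_protection_assessment,
            "human_oversight_plan": self.human_oversight_plan,
            "transparency_notice": self.transparency_notice,
            "monitoring_plan": self.monitoring_plan,
        }
        self.missing = [name for name, ok in required.items() if not ok]
        return not self.missing


def deployment_gate(record: ComplianceRecord) -> None:
    """Refuse to release the system when the compliance record is incomplete."""
    if not record.check():
        raise RuntimeError(
            f"Deployment blocked for {record.system_name}: missing {', '.join(record.missing)}"
        )
    print(f"{record.system_name}: compliance gate passed (owner: {record.accountable_owner})")


if __name__ == "__main__":
    record = ComplianceRecord(
        system_name="triage-model-v2",
        accountable_owner="Clinical AI Lead",
        data_protection_assessment=True,
        human_oversight_plan=True,
        transparency_notice=True,
        monitoring_plan=False,   # deliberate gap: the gate should block this release
    )
    try:
        deployment_gate(record)
    except RuntimeError as err:
        print(err)   # "Deployment blocked for triage-model-v2: missing monitoring_plan"
```

The specific fields matter less than the design choice: because the gate runs inside the workflow, a missing safeguard stops a release immediately rather than surfacing later as a regulatory finding.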

Ultimately, legal trust is not an obstacle to innovation; it is what makes innovation viable at scale. It enables AI to be adopted with confidence and legitimacy in socially sensitive areas such as healthcare, transportation, and government services.