Wrap Up

This chapter has shown that the promise of AI cannot be separated from the risks it poses—especially when it operates without ethical foresight, legal clarity, or technical stability. Through real-world examples from healthcare, hiring, autonomous vehicles, and social media, we’ve seen that AI systems can cause harm when trustworthiness is treated as an afterthought rather than a design requirement.

Each failure—whether it was Amazon’s biased hiring algorithm, Uber’s fatal AV incident, TikTok’s recommendation bias, or medical misdiagnosis—tells the same story: AI systems don’t just reflect society’s values; they amplify them. If those values are biased, opaque, or unsafe, so too will be the outcomes.

To move forward, we must embed trustworthiness throughout the AI development lifecycle, from the earliest planning stages to post-deployment monitoring. This means:

  • Prioritizing ethical fairness through inclusive design and transparency
  • Ensuring legal accountability with enforceable standards and role clarity
  • Maintaining technical stability through robust testing and real-world validation

But trust isn’t just a technical problem. It is a governance problem.

In Chapter 2, we shift focus from why trust matters to how it can be governed. We will explore what trustworthy AI means in the context of law, policy, regulation, and oversight, and how ethical, legal, and stable AI must be supported by enforceable governance frameworks.

Points to remember

  • The trustworthiness of AI falls into three main categories: ethical, legal, and stable. Together, these provide a key framework for understanding the impact of the technology on our lives.
  • The ethical principles emphasize respect for diversity and a human-centered approach, while the legal standards underscore the need for clear allocation of responsibility and for safety verification. Both must be considered for AI technology to be socially accepted and to develop further.
  • UNESCO, the United Nations, and the European Union have proposed ethical principles that emphasize fairness, respect for diversity, and elimination of data bias in AI, and set standards for AI trustworthiness.
  • The EU AI Act seeks to ensure the trustworthiness of high-risk AI systems through requirements including data management, transparency, human oversight, and safety.
  • To ensure ethical trustworthiness, AI requires a design approach that respects diversity and prioritizes human values and safety.
  • International ethical principles and regulations play an important role in ensuring the trustworthiness of AI technology and making it socially acceptable.