1.3.2. Why Diversity and Human-Centered Design Matter in AI Trust
Applying Ethical Principles: From Values to Implementation
The following examples illustrate how applying ethical principles such as diversity, human-centered design, and fairness can meaningfully increase AI trustworthiness. When applied effectively, these principles help ensure AI systems are not only efficient but also responsible, inclusive, and socially aligned.
Impact of Diversity on Trustworthiness
To treat diverse populations fairly, AI systems must ensure representation and inclusivity during both data collection and algorithm design. In 2020, a major U.S. financial institution faced public criticism for using an AI-based loan underwriting system that produced biased outcomes against certain racial groups. The model, trained on historical lending data, reproduced those historical disparities in its decisions, illustrating how systems that overlook fairness ultimately lose public trust. The institution responded by augmenting its data sources and revising its algorithmic structures to reduce the embedded bias.
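The kind of bias audit described above can be sketched with a simple disparate-impact check: compare approval rates across demographic groups and flag large gaps. The data, group labels, and helper names below are illustrative, not from the actual case.

```python
# Minimal sketch of a disparate-impact check on loan approval decisions.
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per demographic group.

    decisions: list of (group, approved) pairs.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.

    Values below 0.8 are a common red flag (the "four-fifths rule").
    """
    return min(rates.values()) / max(rates.values())

# Illustrative decisions: group "A" is approved 3/4, group "B" only 1/4.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # 0.25 / 0.75 ≈ 0.33, well below 0.8
```

A real audit would also control for legitimate underwriting factors; this sketch only shows the raw rate comparison that motivates a deeper investigation.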
Impact of Human-Centered Design on Trustworthiness
AI systems must be designed to prioritize human safety and well-being. In 2019, a healthcare analytics AI was criticized for allocating care based on cost efficiency rather than patient urgency, causing vulnerable patients to be deprioritized. This lack of human-centered design led to reputational damage and policy changes. The developers later introduced new algorithms that prioritized medical necessity over cost, reflecting a shift toward ethical and value-aligned development.
How Global Tech Companies Operationalize Ethical AI
Google’s AI Fairness initiative shows how private-sector firms can turn ethical principles into product-level action. Through tools like Fairness Indicators and the What-If Tool, developers can examine model behavior across demographic groups and probe fairness under different conditions. These tools help turn abstract principles, such as non-discrimination and transparency, into measurable system properties.
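The core idea behind such slice-based tools can be sketched in plain Python: compute the same error metric on each demographic slice and compare. This is an illustrative re-implementation of the metric-per-slice idea, not the actual Fairness Indicators or What-If Tool API, and the example data is invented.

```python
# Sketch of slice-based fairness evaluation: one error metric per group.

def false_positive_rate(labels, preds):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

def fpr_by_slice(examples):
    """examples: list of (group, true_label, prediction) triples."""
    slices = {}
    for group, y, p in examples:
        ys, ps = slices.setdefault(group, ([], []))
        ys.append(y)
        ps.append(p)
    return {g: false_positive_rate(ys, ps) for g, (ys, ps) in slices.items()}

# Illustrative predictions: group "B" is wrongly flagged twice as often.
examples = [("A", 0, 0), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
            ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0)]
by_slice = fpr_by_slice(examples)  # FPR: A = 1/3, B = 2/3
```

A gap like this between slices is exactly the signal such dashboards surface, prompting developers to inspect data coverage and decision thresholds for the disadvantaged group.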
How Countries Define Ethical AI
Many governments have issued national AI ethics guidelines that promote fairness, transparency, accountability, and public benefit. While these frameworks are not always legally binding, they strongly influence public policy, procurement standards, and private-sector development.
Table 2 summarizes the ethical priorities of four leading countries/regions—revealing both shared commitments and distinct national approaches.
Table 2: Comparative Ethics Principles in the EU, Korea, Japan, and China
| EU | Korea | Japan | China |
|---|---|---|---|
| Technical robustness and safety | Safety | Ensuring Security | Protection of Privacy and Security |
| Privacy and data governance | Protection of privacy; Data Management | Privacy Protection | — |
| Human agency and oversight | — | Human-Centric | Assurance of Controllability and Trustworthiness |
| Transparency | Transparency | Fairness, Accountability, and Transparency | Strengthening of Accountability |
| Accountability | Accountability | — | Strengthening of Accountability (also aligns with Transparency) |
| Diversity, non-discrimination and fairness | Guarantee of Human Rights; Respect for Diversity | — | Promotion of Fairness and Justice |
| Environmental and societal well-being | Prohibition of Infringement; Public Good; Solidarity | — | Improvements to the cultivation of ethics; Advancement of Human Welfare |
| — | — | Education/Literacy; Fair Competition; Innovation | — |
Together, these cases and policies show that trustworthy AI begins with principled design and scales through structured action. Ethical frameworks are no longer theoretical; they are becoming operational guidelines for developers, institutions, and governments worldwide.