1.1.3. Ethical and Legal Tensions in Emerging Technologies
Balancing Innovation and Ethics: The Role of Trustworthy AI
The rapid development of artificial intelligence (AI) is bringing far-reaching changes to human life and society. However, these technological innovations are also intensifying legal and ethical conflicts and causing social controversy.
As AI becomes more deeply intertwined with everyday life, the divide between civil society groups and technology companies is growing more visible. Civil society organizations warn of privacy violations, biased decision-making, and unpredictable risks, and call for stronger regulation and oversight. Technology companies, on the other hand, argue that excessive regulation could stifle innovation and weaken national competitiveness in the global AI race.
Beyond these specific disputes, AI has triggered broader tensions between innovation and societal values. Such conflicts are not merely disagreements between stakeholders; they also risk contributing to social instability. If a country prioritizes technological development without adequate oversight, for example, it may sacrifice public safety. Conversely, if regulations are too strict, they may lead to economic losses and diminished global influence.
Striking a balance between innovation and public safety is difficult, not because one is more important than the other, but because both are essential and often in tension. The challenge is to maximize the benefits of AI while minimizing the risks it can pose.
In this context, trustworthy AI acts as a bridge, not a compromise, between innovation and social responsibility. Without hindering progress, trustworthy AI provides a foundation for stability, public confidence, and ethical alignment.
To support trustworthy AI, international standards offer structured frameworks that developers, policymakers, and organizations can rely on. These standards are designed to guide the safe, ethical, and robust development of AI systems and are increasingly being adopted to build consistency and accountability in AI practices worldwide.
- ISO/IEC 24028:2020 – "Trustworthiness in AI": Defines key properties of trustworthy AI systems, including robustness, security, safety, and reliability. It offers conceptual guidance for ensuring AI operates reliably, resists adversarial manipulation, and avoids harm.
- ISO/IEC 22989:2022 – "AI Concepts and Terminology": Provides standardized terminology for AI development and governance. By aligning definitions of terms such as "machine learning," "training data," and "bias," it supports transparency and coordination across technical and policy domains.
- ISO/IEC 23894:2023 – "Risk Management for AI": Focuses on managing risks across the AI lifecycle. It emphasizes identifying and mitigating harms, documenting decisions, and involving stakeholders to promote continuous monitoring and system accountability (a minimal illustration follows this list).
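To make the risk-management guidance concrete, the sketch below models a minimal AI risk register in Python. It illustrates the lifecycle ideas in ISO/IEC 23894 (identify, mitigate, document, monitor); it is not an implementation of the standard itself, and the names used (RiskEntry, add_review, the example data) are hypothetical choices for this sketch.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One documented risk in an AI system's lifecycle (hypothetical schema)."""
    risk_id: str
    description: str      # the identified harm, e.g. biased decision-making
    lifecycle_stage: str  # where it arises: data collection, training, deployment...
    severity: str         # coarse rating: "low" / "medium" / "high"
    mitigation: str       # the documented countermeasure decision
    owner: str            # the accountable stakeholder
    review_log: list[tuple[date, str]] = field(default_factory=list)

    def add_review(self, when: date, note: str) -> None:
        """Record a monitoring check, supporting continuous oversight."""
        self.review_log.append((when, note))

# Example: documenting a bias risk identified during model evaluation.
register = [
    RiskEntry(
        risk_id="R-001",
        description="Scoring model underperforms for one demographic subgroup",
        lifecycle_stage="evaluation",
        severity="high",
        mitigation="Rebalance training data; add subgroup accuracy checks",
        owner="model-governance team",
    )
]
register[0].add_review(date(2024, 3, 1), "Subgroup accuracy gap reduced to 2%")

# In a real workflow, unreviewed high-severity risks might block release.
open_high = [r for r in register if r.severity == "high" and not r.review_log]
print(f"High-severity risks awaiting review: {len(open_high)}")
```

The structure mirrors the standard's emphasis: each risk is identified, assigned an accountable owner, tied to a documented mitigation decision, and revisited over time rather than handled once.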
These international standards offer a way forward, not by choosing between innovation and regulation, but by ensuring the two can coexist. Trustworthy AI, grounded in legal, ethical, and technical guidance, becomes the foundation for building systems that are not only powerful but also aligned with the public interest.