Trustworthy AI : Foundation Level

This textbook consists of seven chapters, starting with an introduction to trustworthy AI and following the life cycle of an AI system.

1. Trustworthy AI Systems

1.1. If AI kills people, who is responsible?

1.2. Redefining AI Trustworthiness

1.3. Ethical Principles and Standards for Trustworthiness

1.4. Ensuring Trustworthiness Across the AI Development Lifecycle

1.5. Analyzing AI Legal Issues Through the Lens of Trustworthiness

2. Trustworthy Frameworks for AI Governance

2.1. The High-Stakes Contest Over AI Power

2.2. AI Governance is Not New, But It’s Falling Short

2.3. AI Governance in Action: Global Strategies and Models

2.4. The Future of AI Governance: Adapting to a Rapidly Evolving AI Landscape

3. Managing Risk in AI Systems

3.1. When Perfection Fails: The Hidden Shocks in ‘Benchmark AI’

3.2. What Makes an AI “Technically Robust”?

3.3. When Oversight Fails: The Illusion of the Human-in-the-Loop

3.4. Who Designs the Failsafes?

4. Ensuring Data Quality and Ethical Governance

4.1. When Flawed Data Shapes Intelligent Decisions

4.2. How Bias Enters When Data Goes Unfiltered

4.3. What Makes a Dataset “Trustworthy”?

4.4. What Happens When a Dataset Starts to Decay—And No One Notices?

5. Responsible AI Model Development

5.1. Can You Trust a Model You Don’t Understand?

5.2. Can You Trace the Model’s Thinking—or Just Its Output?

5.3. Fairness Isn’t Always Fair

5.4. Deception by Design? Why You Can’t Always Trust What AI Says

6. Validating AI Systems at the Edge of Deployment

6.1. Can You Prove Your AI Is Ready to Deploy?

6.2. What Really Breaks at Deployment?

6.3. How Do You Build in Control Before It’s Too Late?

7. Monitoring AI Operations and Maintaining Trust

7.1. The System Knew Something Was Wrong. Why Didn’t Anyone Stop It?

7.2. Can Humans See Enough to Intervene?

7.3. How Do You Keep the System Aligned After Launch?