1.4.2. Application of three trustworthy categories (ethical, legal, and stable) at each stage of the life cycle
Trustworthiness Across the AI Lifecycle
Each stage of the AI development lifecycle offers an opportunity to implement the three trustworthy categories: ethical, legal, and stable practice. Seeing how these categories apply at each stage shows that trust in AI systems extends beyond technical reliability to meeting broader societal expectations.
The table below outlines how these trustworthiness categories can be practically embedded into each phase of the AI lifecycle.
Table 3: Applying Trustworthiness Categories Across AI Lifecycle Stages
| Lifecycle Stage | Ethical Trustworthiness | Legal Trustworthiness | Stability |
|---|---|---|---|
| Planning Phase | Review goals for human-centeredness and fairness (e.g., UN guidelines on designing to prevent worsening social inequalities) | Establish a clear data-usage plan that accounts for data regulations such as the GDPR | Identify anticipated technical risks and create a mitigation plan |
| Data Collection and Management Phase | Gather representative data from diverse populations; mitigate bias with tools such as the IBM AI Fairness 360 toolkit | Comply with privacy regulations; ensure data traceability and explainability (EU AI Act) | Maintain data integrity; apply technical measures to prevent loss and tampering |
| Model Design Phase | Design mechanisms that ensure algorithmic fairness; analyze model bias and simulate fairness with the Google What-If Tool | Meet regulatory requirements; prepare design documentation to gain external verifiability | Build robustness against adversarial attacks, securing stability in high-risk areas such as finance |
| Evaluation and Validation Phase | Test performance in a variety of environments to ensure no particular population is disadvantaged | Document test procedures and archive results for compliance with regulatory requirements | Validate responses to unexpected inputs and environment changes |
| Deployment and Monitoring Phase | Reflect user feedback and continue evaluating ethical issues | Monitor post-deployment performance and safety and address emerging risks (EU AI Act) | Track performance in real time; use anomaly-detection monitoring tools (e.g., fraud-transaction detection in financial AI) |
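To make the fairness checks in the data-collection row concrete, the sketch below computes two standard group-fairness metrics, disparate impact and statistical parity difference, over a small invented set of hiring outcomes. Toolkits such as IBM AI Fairness 360 compute these same metrics (among many others) from full datasets; the data and group labels here are purely illustrative.

```python
# Hypothetical hiring outcomes as (group, hired) pairs.
# Group "A" plays the privileged role, "B" the unprivileged role.
outcomes = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 1),
]

def selection_rate(group):
    """Fraction of positive (hired) outcomes within one group."""
    results = [hired for g, hired in outcomes if g == group]
    return sum(results) / len(results)

rate_a = selection_rate("A")  # 3/4 = 0.75
rate_b = selection_rate("B")  # 2/4 = 0.50

# Disparate impact: ratio of unprivileged to privileged selection rates.
# A common rule of thumb (the "four-fifths rule") flags values below 0.8.
disparate_impact = rate_b / rate_a

# Statistical parity difference: gap between the rates; 0 means parity.
parity_difference = rate_b - rate_a

print(disparate_impact, parity_difference)
```

With these toy numbers the disparate impact is about 0.67, below the 0.8 rule-of-thumb threshold, so this dataset would be flagged for the mitigation steps described in the table.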
As shown, each lifecycle phase plays a distinct role in implementing ethical, legal, and stability safeguards. When applied systematically, these categories help AI systems go beyond technical goals and build trustworthiness in both function and public perception.
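The anomaly-detection monitoring mentioned for the deployment phase can also be sketched minimally. The example below flags values whose z-score against a trailing window exceeds a threshold, the same basic idea behind fraud-transaction monitors in financial AI; the transaction amounts, window size, and threshold are all illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(values, window=5, threshold=3.0):
    """Return indices of values far outside the trailing window's distribution.

    A value is flagged when its z-score against the previous `window`
    observations exceeds `threshold`.
    """
    flagged = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical stream of transaction amounts seen by a deployed model:
# the 250.0 at index 6 stands far outside the recent pattern.
amounts = [12.0, 15.0, 11.0, 14.0, 13.0, 12.5, 250.0, 13.5]
print(flag_anomalies(amounts))  # → [6]
```

Production monitors add alerting, drift detection, and retraining triggers on top of this kind of check, but the core loop, comparing live behavior against recent history, is the same.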
TRAI Challenges
Embedding Trust in the AI Lifecycle

Scenario:
Your team is designing an AI system for predicting job applicant success. You are in the early planning stages and need to create a trust map that outlines potential risks and safeguards across the AI lifecycle.
Task:
Using the five standard lifecycle stages below, do the following:
1. Identify one risk to trustworthy AI at each stage.
2. Classify the risk by trust pillar (Ethical, Legal, Stable).
3. Suggest a strategy or control to mitigate that risk.
Instructions:
- Complete the table below with clear, concise entries (1–2 sentences per cell).
- Base your answers on the concepts and case studies discussed earlier in the chapter.
- You may complete this table on a separate worksheet or digital notebook to reflect on how each lifecycle stage can protect against harm.
| Lifecycle Stage | Identified Risk | Trust Pillar (Ethical/Legal/Stable) | Mitigation Strategy |
|---|---|---|---|
| Planning and Design | | | |
| Data Collection & Preprocessing | | | |
| Model Training | | | |
| Deployment and Integration | | | |
| Monitoring and Feedback | | | |