1.4.3. Step-by-step challenges and case studies to solve them
Lifecycle Case Studies: Trust in Action
Practical examples make the problems that arise across the development lifecycle of an AI system easier to grasp, and show why trustworthiness must be ensured at each stage.
ChatGPT: Safety Issues in the Planning Phase
In 2022, OpenAI launched ChatGPT, a conversational AI system that quickly gained attention for its innovative capabilities. However, it also drew early criticism over user safety: when malicious users asked inappropriate or harmful questions, the model often responded without constraints.
This case illustrates how overlooking ethical trustworthiness in the planning phase can lead to user harm and public backlash. In response, OpenAI made user safety a design priority by introducing guardrails and blocking mechanisms, while also incorporating feedback from external reviewers. This shows how early ethical review can support long-term trust.
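As a rough illustration of that kind of guardrail, consider the minimal sketch below. It is not OpenAI's actual safety stack; the policy terms and the `model` object are hypothetical, and real systems use trained moderation classifiers rather than keyword lists. The point is the control flow: check the prompt before generating, and refuse rather than answer.

```python
# Minimal sketch of a pre-generation guardrail. BLOCKED_TOPICS and the
# `model` object are hypothetical placeholders, not a real safety system.

BLOCKED_TOPICS = {"how to build a weapon", "self-harm methods"}  # illustrative only

def violates_policy(prompt: str) -> bool:
    """Naive keyword screen over the incoming prompt."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_answer(prompt: str, model) -> str:
    """Refuse before generation when the prompt trips the policy screen."""
    if violates_policy(prompt):
        return "I can't help with that request."
    return model.generate(prompt)  # hypothetical generation call
```

In production the keyword screen would be replaced by trained classifiers, layered filters, and human review, but the design idea is the same: safety checks sit in front of the model, not after it.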
Clearview AI: Legal Issues in the Data Collection Phase
In 2021, Clearview AI [1] faced global criticism for collecting facial images from public websites and social media without user consent. Multiple European regulators concluded the company had violated the General Data Protection Regulation (GDPR) and issued bans and fines.
Although Clearview later introduced a consent process and adjusted its data policies, the damage to public trust and legal standing was already done. This case demonstrates how failure to ensure legal trustworthiness during data collection can jeopardize an AI system’s legitimacy and lead to reputational, regulatory, and financial consequences.
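To make the lesson concrete, here is a minimal sketch of a consent gate applied at the point of collection. The record fields are hypothetical, and real GDPR compliance involves far more than a boolean flag; the sketch only shows the principle that non-consented data should never enter storage in the first place.

```python
from dataclasses import dataclass

# Hypothetical record schema for illustration only.
@dataclass
class FaceRecord:
    image_url: str
    source: str
    subject_consented: bool  # explicit, documented consent flag

def ingest(records: list[FaceRecord]) -> list[FaceRecord]:
    """Admit only records with documented consent; discard the rest."""
    return [r for r in records if r.subject_consented]
```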
TikTok: The Problem of Bias in the Model Design Phase
In 2022, TikTok's recommendation algorithm [2] came under scrutiny for reinforcing filter bubbles. When users viewed several videos on a topic, the algorithm often saturated their feed with similar content, limiting exposure to diverse viewpoints.
To respond, TikTok introduced user controls over the recommendation feed and increased transparency around how the algorithm functions. These updates aimed to improve fairness and explainability in the model design phase and reflected an effort to rebuild trust.
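One generic technique for loosening filter bubbles is greedy diversity re-ranking: candidate videos are re-scored so that each repeated topic is progressively penalized. The sketch below is a simplified illustration of that idea, not TikTok's actual algorithm; the video ids, topics, and scores are hypothetical.

```python
# Greedy diversity re-ranking: a generic filter-bubble mitigation sketch.
# `candidates` maps a video id to a (topic, relevance_score) pair.

def rerank_with_diversity(candidates: dict[str, tuple[str, float]],
                          penalty: float = 0.3) -> list[str]:
    """Each time a topic is selected, later videos on the same topic
    have their effective score multiplied by another (1 - penalty)."""
    topic_counts: dict[str, int] = {}
    ranking: list[str] = []
    pool = dict(candidates)
    while pool:
        best = max(
            pool,
            key=lambda vid: pool[vid][1]
            * (1 - penalty) ** topic_counts.get(pool[vid][0], 0),
        )
        topic, _score = pool.pop(best)
        ranking.append(best)
        topic_counts[topic] = topic_counts.get(topic, 0) + 1
    return ranking

# "dogs1" has the lowest raw score, but once the cat topic is penalized
# it is ranked second: ['cats1', 'dogs1', 'cats2', 'cats3'].
print(rerank_with_diversity({
    "cats1": ("cats", 0.95),
    "cats2": ("cats", 0.90),
    "cats3": ("cats", 0.88),
    "dogs1": ("dogs", 0.70),
}))
```

Exposing a knob like `penalty` as a user-facing control is one plausible way to hand users the kind of feed adjustment TikTok introduced.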
These examples illustrate how each stage of the lifecycle carries distinct trustworthiness risks:
- ChatGPT showed what can happen when ethical foresight is absent at the planning stage but also how proactive reform can rebuild trust.
- Clearview AI demonstrated that ignoring legal safeguards in data collection has serious consequences.
- TikTok revealed the bias risks embedded in model design and how transparency features can help mitigate them.
Together, these cases reinforce a key principle: trustworthiness must be embedded early and revisited continuously throughout the AI lifecycle.
The lifecycle model offers a structured path for embedding trust into AI systems. But what happens when these steps are skipped or misunderstood? The three cases above answer that question: each is a real-world example of an AI system that failed to meet legal, ethical, or stability expectations. They are not just stories of harm; they are warnings about what breaks down when trust isn't built in from the start.
Bibliography
1. European Data Protection Board. (2021). EDPB fines Clearview AI for GDPR violations. https://edpb.europa.eu/news/news/2021/edpb-fines-clearview-ai_en
2. Paul, K. (2022, July 6). TikTok under scrutiny for data practices and filter bubble algorithms. The Guardian. https://www.theguardian.com/technology/2022/jul/06/tiktok-algorithm-bias-filter-bubbles-privacy-complaints
