2.3.2. The U.S. Approach: Innovation-Driven but Fragmented
In contrast to the EU's centralized legal structure, the United States employs a sectoral, decentralized governance model: there is no unified AI law at the federal level. Instead, responsibility is distributed across agencies based on domain authority (e.g., healthcare, finance, commerce), with no single body empowered to regulate AI comprehensively.
AI oversight emerges from a patchwork of agency-level guidance, executive orders, voluntary frameworks, and state-led initiatives. This bottom-up, innovation-first approach reflects the U.S. emphasis on market freedom and technological leadership.
Key Instruments:
- AI Bill of Rights (2022): A non-binding policy framework that outlines high-level principles such as privacy, fairness, explainability, and protection from algorithmic discrimination. It carries no legal obligation and functions primarily as a normative guide. [1]
- Agency Oversight: Institutions like the Federal Trade Commission (FTC) and the Department of Health and Human Services (HHS) can investigate deceptive, harmful, or discriminatory AI practices, but only within their existing legal purview.
- NIST AI Risk Management Framework (2023): A voluntary technical toolkit for AI risk assessment and mitigation, organized around four core functions: Govern, Map, Measure, and Manage.
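The framework is a process document rather than software, but teams often operationalize it as an internal risk register. The following is a minimal sketch in Python, assuming a register keyed by the framework's four core functions; the class names, fields, and severity scale are illustrative assumptions, not anything NIST specifies.

```python
from dataclasses import dataclass, field
from enum import Enum


class RMFFunction(Enum):
    # The four core functions defined in NIST AI RMF 1.0.
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class Risk:
    # One register entry; these fields are illustrative, not NIST-defined.
    description: str
    function: RMFFunction   # which RMF function the entry supports
    severity: int           # 1 (low) to 5 (high), an org-defined scale
    mitigation: str | None = None


@dataclass
class RiskRegister:
    system_name: str
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def unmitigated(self, min_severity: int = 4) -> list[Risk]:
        # Surface severe risks that still lack a mitigation plan.
        return [r for r in self.risks
                if r.severity >= min_severity and r.mitigation is None]


register = RiskRegister("resume-screening-model")
register.add(Risk("Disparate impact across demographic groups",
                  RMFFunction.MEASURE, severity=5))
print([r.description for r in register.unmitigated()])
```

Because the framework is voluntary, nothing compels an organization to maintain such a register or act on what it surfaces; that gap underlies several of the criticisms below.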
Governance Impact
The U.S. model is flexible and innovation-oriented, enabling rapid deployment of new AI products, especially in the private sector. It fosters public-private collaboration in research, infrastructure, and standards development.
However, without centralized governance, accountability is difficult to assign. Enforcement varies by sector, leading to uneven protections and inconsistent expectations. As a result, reactive governance often replaces proactive regulation.
Criticisms and Trade-offs
- No binding federal accountability mechanisms currently exist for AI developers. Oversight is fragmented across sector-specific agencies without centralized governance.
- Overlapping state laws (e.g., the California Consumer Privacy Act vs. Texas facial recognition laws) create inconsistent compliance requirements for multi-jurisdictional AI systems, as the sketch after this list illustrates.
- Algorithmic discrimination concerns have been raised by civil society, particularly in domains such as hiring, policing, and healthcare, yet no enforceable legal remedies are consistently available.
- Transparency is not mandatory, and many AI systems operate without clear documentation, public auditability, or user explanation rights.
- The U.S. model relies heavily on voluntary compliance, internal ethical guidelines, and self-regulation, which may not adequately protect marginalized communities or ensure systemic fairness.
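To make the jurisdictional fragmentation concrete, the sketch below models each state's obligations as a set of duties and computes what a system deployed across states must satisfy. The duty labels and the STATE_RULES mapping are invented placeholders loosely patterned on the two laws named above, not summaries of the actual statutes.

```python
# Hypothetical illustration of multi-state compliance; not legal advice.
STATE_RULES: dict[str, set[str]] = {
    "CA": {"opt_out_required", "data_sale_disclosure"},  # CCPA-style duties
    "TX": {"biometric_consent_before_capture"},          # facial-recognition-law-style duties
}


def duties_for_deployment(states: list[str]) -> set[str]:
    # A system live in several states must satisfy the union of all duties.
    duties: set[str] = set()
    for state in states:
        duties |= STATE_RULES.get(state, set())
    return duties


# A facial-recognition feature offered in both states inherits the
# combined obligations of every jurisdiction it touches.
print(sorted(duties_for_deployment(["CA", "TX"])))
```

In practice, each new state law adds another entry to this table, which is precisely the compliance inconsistency critics point to.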
Global Relevance
The U.S. model sets the tone for technological competitiveness and AI export leadership, particularly in foundational models and cloud-based deployment. It influences standards bodies like IEEE and ISO, but lags behind the EU in defining legal accountability.
Other countries look to the U.S. for innovation policy rather than governance, but the domestic conversation is shifting. Legislative efforts (such as the Algorithmic Accountability Act) signal growing interest in formalizing AI oversight at the federal level.
Bibliography
1. White House OSTP (2022). "Blueprint for an AI Bill of Rights." https://www.whitehouse.gov/ostp/ai-bill-of-rights