Joined by the Seam or a Thread?
Understanding America’s Patchwork Approach to AI Regulation

By Amaka Ozobia
As AI transforms how we gather and use information in our professional and personal lives, protecting privacy, data, and public safety is vital. Failing to comply with national and international legal standards can result in grave consequences. Revealing private information can lead to hefty regulatory fines, significantly damage public trust in AI, harm organizational integrity, and cost millions in lost revenue. The delay in a comprehensive AI Act reflects a struggle to balance a unified national framework with the need to encourage innovation and mitigate security risks amid global competition.
This article is written by Amaka Ozobia. Amaka is an AI governance attorney specializing in legal data analysis and trust and safety. Her career spans the federal, non-profit, and private sectors. She is dedicated to promoting AI systems that are accessible, mitigate risks, and enhance user safety. Additionally, she is passionate about working with like-minded organizations, such as Women In AI, to promote greater inclusion and representation in the field.
Applique, backstitch, batting, chain piecing, interfacing, and jelly roll. What comes to mind? I never thought to use quilting terms to describe AI regulation, but the comparison fits. The United States still lacks a single, comprehensive "AI Act." Instead, AI is governed by a patchwork of:
1. Existing federal laws (consumer protection, civil rights, financial, health, communications)
2. Agency guidance and enforcement
3. An expanding list of state AI laws
AI Governance and Regulation in the United States
The federal government establishes AI policy and guidance through the National Artificial Intelligence Initiative Act of 2020 (NAII), which coordinates AI research, funding, and inter-agency collaboration. A December 2025 Executive Order called for the development of a national AI policy to address inconsistent state laws. The order enacted several key measures, including:
1. Establishing an AI Litigation Taskforce
2. Requiring a federal review of all state AI laws
3. Linking federal funding to state compliance, targeting states whose laws are seen as burdensome and in conflict with the federal government’s goal of promoting “a minimally burdensome national policy framework for AI” (Executive Order, Dec. 2025)
On March 20, 2026, the White House released its proposal for a comprehensive national legislative framework. The new framework retains the overarching goals of the December 2025 Executive Order: promoting industry innovation while preserving states’ rights to protect vulnerable populations such as children, prevent fraud, and develop AI infrastructure. However, some experts have raised concerns about language directing Congress to “preempt state laws” deemed unduly burdensome, and about how states may be affected in the long term.
Federal Agencies and AI Regulation
As another important part of AI regulation, federal agencies play a key role in overseeing artificial intelligence and reducing potential harms and security risks. For example, the Federal Trade Commission (FTC) protects consumers from AI-related harms using its existing authority under Section 5 of the FTC Act, which bans “unfair or deceptive acts or practices” (15 U.S.C. § 45). The FTC has applied this clause to “AI washing,” a deceptive marketing tactic in which bad actors or companies overstate or exaggerate AI capabilities to deceive, defraud, or mislead consumers. Other possible AI-related violations of the FTC Act include:
deploying biased algorithms that harm consumers
failing to protect data used to train AI systems
using automated AI systems that harm consumers
The FTC has brought enforcement actions against companies whose AI systems violated consumer protection laws.
The EEOC and DOJ: AI in Employment
Both the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) enforce laws governing the use of AI in employment. Increasingly, AI is used to screen applicants, score and rank prospective hires, and monitor employee productivity. Federal statutes, including Title VII of the Civil Rights Act of 1964 and the Americans with Disabilities Act, prohibit discriminatory practices against individuals based on protected characteristics. Hiring tools and software that utilize AI may be found to violate civil rights law if such tools:
exclude protected groups disproportionately
fail to accommodate disabilities
use and rely upon biased training data
Employers can be held liable for discriminatory practices resulting from AI systems. When using AI, employers remain responsible for providing reasonable accommodations to qualified individuals in order to comply with existing law.
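To make “exclude protected groups disproportionately” concrete, here is a minimal sketch of the EEOC’s four-fifths (80%) guideline, a common first screen for disparate impact in automated hiring tools. The selection numbers below are hypothetical, invented purely for illustration; a real assessment would involve far more than this one ratio.

```python
# Illustrative sketch of the EEOC four-fifths (80%) guideline, a common
# first screen for disparate impact. All numbers are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(rate_group: float, rate_reference: float) -> bool:
    """True if the group's selection rate is at least 80% of the
    highest-selected (reference) group's rate, the conventional threshold."""
    return rate_group / rate_reference >= 0.8

# Hypothetical outcomes from an AI resume-screening tool
rate_a = selection_rate(48, 100)  # reference group: 48% selected
rate_b = selection_rate(30, 100)  # comparison group: 30% selected

# 0.30 / 0.48 = 0.625, below the 0.8 threshold, so this hypothetical
# tool would warrant closer scrutiny for disparate impact.
print(four_fifths_check(rate_b, rate_a))  # prints False
```

Failing this screen does not by itself establish a legal violation, but it is the kind of quantitative signal regulators and courts look at when evaluating AI-driven hiring tools.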
AI and the SEC
Artificial intelligence relies on complex algorithms and intricate predictive modeling techniques, which is one major reason it is so widely used in the finance industry. AI is commonly applied to automated trading, predictive analytics, financial or investment advice, and portfolio management. The two main statutes governing the SEC and its activities are the Securities Exchange Act of 1934 and the Investment Advisers Act of 1940. These laws were enacted to promote fair dealing, reduce securities fraud, and protect investors. In 2023, the SEC proposed a rule to regulate AI and related practices involving predictive analytics, due to potential conflicts of interest. However, the rule was withdrawn in June 2025, and as of this writing, how the SEC will address AI remains unresolved.
Advantages and Challenges of a Patchwork Model
Although the US approach is fragmented, one of its key strengths is that federal and state agencies can regulate AI using existing legal authority. Experts in specific fields can easily assess AI in their areas, enabling flexibility, adaptability, and quick responses to new risks. Conversely, when regulation is fragmented, it can create compliance challenges for companies, including higher costs from conflicting rules, inconsistent consumer protections, and greater vulnerabilities in data privacy and security.
AI Regulation is Already Here
While the United States lacks a single comprehensive federal AI law, it actively regulates AI through various statutes and agencies. Areas such as consumer protection, civil rights, medical device regulation, and financial oversight continue to shape how AI systems are used and implemented. Although fragmented, this patchwork creates a dynamic regulatory landscape that supports AI governance in the U.S. Only time will tell whether the recent proposal for a national AI framework will lead to unified AI legislation.
_____________________________________________________________
Collaborate with us!
As always, we appreciate you taking the time to read our blog post.
If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog in 2026! To explore this opportunity, please contact WAI editors Silvia A. Carretta - WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co) or Dina Blikshteyn (dina@womeninai.co).
Silvia A. Carretta and Dina Blikshteyn
- Editors



