Trump’s AI framework targets state laws, shifts child safety burden to parents

Rebecca Bellan · 9:14 AM PDT · March 20, 2026

The Trump administration on Friday laid out a legislative framework for a single national AI policy in the United States. The framework would centralize power in Washington by preempting state AI laws, potentially undercutting the recent surge of efforts by states to regulate the use and development of the technology.
“This framework can only succeed if it is applied uniformly across the United States,” reads a White House statement on the framework. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”
The framework outlines seven key objectives that prioritize innovation and scaling AI, and proposes a centralized federal approach that would override stricter state-level regulations. It places significant responsibility on parents for issues like child safety, and lays out relatively soft, nonbinding expectations for platform accountability.
For example, it says Congress should require AI companies to implement features that “reduce the risks of sexual exploitation and harm to minors,” but does not lay out any clear, enforceable requirements.
Trump’s framework comes three months after he signed an executive order directing federal agencies to challenge state AI laws. The order gave the Commerce Department 90 days to compile a list of “onerous” state AI laws, potentially risking states’ eligibility for federal funds like broadband grants. The agency has yet to publish that list.
The order also directed the administration to work with Congress on a uniform AI law. That vision is coming into focus, and it mirrors Trump’s earlier AI strategy, which focused less on guardrails and more on promoting companies’ growth.
The new framework proposes a “minimally burdensome national standard,” echoing the administration’s broader push to “remove outdated or unnecessary barriers to innovation” and accelerate AI adoption across industries. This is the pro-growth, light-touch regulatory approach championed by “accelerationists,” one of whom is White House AI czar and venture capitalist David Sacks.
While the framework nods to federalism, the carve-outs for states are relatively narrow, preserving only their authority over general laws like fraud and child protection, zoning, and state use of AI. It draws a hard line against states regulating AI development itself, which it says is an “inherently interstate” issue tied to national security and foreign policy.
The framework also seeks to prevent states from “penaliz[ing] AI developers for a third party’s unlawful conduct involving their models” — a key liability shield for developers.
Missing from the framework is any mention of liability rules, independent oversight, or enforcement mechanisms for novel harms caused by AI. In effect, it would centralize AI policymaking in Washington while narrowing the space for states to act as early regulators of emerging risks.
Critics say states are the sandboxes of democracy and have been quicker to pass laws around emerging risks. Notably, New York’s RAISE Act and California’s SB-53 seek to ensure large AI companies have and adhere to safety protocols that are publicly documented.
“White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of regular, hardworking Americans,” said Brendan Steinhauser, CEO of The Alliance for Secure AI. “This federal AI framework seeks to prevent states from legislating on AI and provides no path to accountability for AI developers for the harms caused by their products.”
Many in the AI industry are celebrating this direction because it gives them broader liberties to “innovate” without the threat of regulation.
“This framework is exactly what startups have been asking for: a clear national standard so they can build fast and scale,” Teresa Carlson, president of General Catalyst Institute, told TechCrunch. “Founders shouldn’t have to navigate a patchwork of conflicting state AI laws that impede innovation.”
The framework was issued at a moment when child safety has emerged as a central flashpoint in the debate over AI. Certain states have moved aggressively to pass laws aimed at protecting minors and placing more responsibility on tech companies. The administration’s proposal points in a different direction, placing greater emphasis on parental control than platform accountability.