For the first time, Washington is getting close to deciding how to regulate artificial intelligence. And the fight that’s brewing isn’t about the technology, it’s about who gets to do the regulating.
In the absence of a meaningful federal AI standard that focuses on consumer safety, states have introduced dozens of bills to protect residents against AI-related harms, including California’s AI safety bill SB-53 and Texas’s Responsible AI Governance Act, which prohibits intentional misuse of AI systems.
The tech giants and buzzy startups born out of Silicon Valley argue such laws create an unworkable patchwork that threatens innovation.
“It’s going to slow us in the race against China,” Josh Vlasto, co-founder of pro-AI PAC Leading the Future, told TechCrunch.
The industry, and several of its transplants in the White House, is pushing for a national standard or none at all. In the trenches of that all-or-nothing battle, new efforts have emerged to prohibit states from enacting their own AI legislation.
House lawmakers are reportedly trying to use the National Defense Authorization Act (NDAA) to block state AI laws. At the same time, a leaked draft of a White House executive order also demonstrates strong support for preempting state efforts to regulate AI.
A sweeping preemption that would take away states’ rights to regulate AI is unpopular in Congress, which voted overwhelmingly against a similar moratorium earlier this year. Lawmakers have argued that without a federal standard in place, blocking states will leave consumers exposed to harm, and tech companies free to operate without oversight.
To create that national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a package of federal AI bills that cover a range of consumer protections, including fraud, healthcare, transparency, child safety, and catastrophic risk. A megabill such as this will likely take months, if not years, to become law, underscoring why the current rush to limit state authority has become one of the most contentious fights in AI policy.
The battle lines: the NDAA and the EO
Trump displays an executive order on AI he signed on July 23, 2025. Image Credits: ANDREW CABALLERO-REYNOLDS/AFP/Getty Images
Efforts to block states from regulating AI have ramped up in recent weeks.
The House has considered tucking language into the NDAA that would prevent states from regulating AI, Majority Leader Steve Scalise (R-LA) told Punchbowl News. Congress was reportedly working to finalize a deal on the defense bill before Thanksgiving, Politico reported. A source familiar with the matter told TechCrunch that negotiations have focused on narrowing the scope, potentially preserving state authority over areas like kids’ safety and transparency.
Meanwhile, a leaked White House EO draft reveals the administration’s own potential preemption strategy. The EO, which has reportedly been put on hold, would create an “AI Litigation Task Force” to challenge state AI laws in court, direct agencies to evaluate state laws deemed “onerous,” and push the Federal Communications Commission and Federal Trade Commission toward national standards that override state rules.
Notably, the EO would give David Sacks – Trump’s AI and crypto czar and co-founder of VC firm Craft Ventures – co-lead authority over creating a single legal framework. This would give Sacks direct power over AI policy that supersedes the typical role of the White House Office of Science and Technology Policy and its head, Michael Kratsios.
Sacks has publicly advocated for blocking state regulation and keeping federal oversight minimal, favoring industry self-regulation to “maximize growth.”
The patchwork argument
Sacks’s position mirrors the viewpoint of much of the AI industry. Several pro-AI super PACs have emerged in recent months, throwing hundreds of millions of dollars into local and state elections to oppose candidates who support AI regulation.
Leading the Future – backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale – has raised more than $100 million. This week, Leading the Future launched a $10 million campaign pushing Congress to craft a national AI policy that overrides state laws.
“When you’re trying to drive innovation in the tech sector, you can’t have a situation where all these laws keep popping up from people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch.
He argued that a patchwork of state regulations will “slow us in the race against China.”
Nathan Leamer, executive director of Build American AI, the PAC’s advocacy arm, confirmed the group supports preemption even without AI-specific federal consumer protections in place. Leamer argued that existing laws, like those addressing fraud or product liability, are enough to handle AI harms. Where state laws often seek to prevent problems before they arise, Leamer favors a more reactive approach: let companies move fast, and address problems in court later.
No preemption without representation
Alex Bores speaking at an event in Washington, D.C., on November 17, 2025. Image Credits: TechCrunch
Alex Bores, a New York Assembly member running for Congress, is one of Leading the Future’s first targets. He sponsored the RAISE Act, which requires large AI labs to have safety plans to prevent critical harms.
“I believe in the power of AI, and that is why it is so important to have reasonable regulations,” Bores told TechCrunch. “Ultimately, the AI that’s going to win in the marketplace is going to be trustworthy AI, and often the marketplace undervalues or puts poor short-term incentives on investing in safety.”
Bores supports a national AI policy, but argues states can move faster to address emerging risks.
And it’s true that states move quicker.
As of November 2025, 38 states have adopted more than 100 AI-related laws this year, mainly targeting deepfakes, transparency and disclosure, and government use of AI. (A recent study found that 69% of those laws impose no requirements on AI developers at all.)
Activity in Congress provides more evidence for the slower-than-states argument. Hundreds of AI bills have been introduced, but few have passed. Since 2015, Rep. Lieu has introduced 67 bills to the House Science Committee. Only one became law.
More than 200 lawmakers signed an open letter opposing preemption in the NDAA, arguing that “states serve as laboratories of democracies” that must “retain the flexibility to confront new digital challenges as they arise.” Nearly 40 state attorneys general also sent an open letter opposing a state AI regulation ban.
Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders – authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship – argue the patchwork complaint is overblown.
AI companies already comply with tougher EU regulations, they note, and most industries find a way to operate under varying state laws. The real motive, they say, is avoiding accountability.
What could a federal standard look like?
Lieu is drafting a 200-plus-page megabill he hopes to introduce in December. It covers a range of issues, like fraud penalties, deepfake protections, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large language model companies.
That last provision would require AI labs to test their models and publish the results – something most do voluntarily now. Lieu hasn’t yet introduced the bill, but he said it doesn’t direct any federal agencies to review AI models directly. That differs from a similar bill introduced by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program for advanced AI systems before they are deployed.
Lieu acknowledged his bill wouldn’t be as strict, but he said it has a better chance of becoming law.
“My goal is to get something into law this term,” Lieu said, noting that House Majority Leader Scalise is openly hostile to AI regulation. “I’m not writing a bill that I’d have if I were king. I’m trying to write a bill that could pass a Republican-controlled House, a Republican-controlled Senate, and a Republican-controlled White House.”