Since May, Congress has been debating an unprecedented proposal: a 10-year moratorium that would eliminate virtually all state and local AI policies across the nation. This provision, tucked into the “One Big Beautiful Bill,” would prohibit states from enacting or enforcing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for the next decade.
It’s not clear what version of the moratorium, if any, will become law. The House sent the One Big Beautiful Bill to the Senate’s Commerce Committee, where the moratorium has been subject to an ongoing debate and numerous revisions. The latest public Senate text — which could be voted on as early as Friday — ties the prohibition to the “Broadband Equity, Access, and Deployment” (BEAD) program, threatening to withhold billions of dollars in federal funds to expand broadband from states that choose to regulate AI.
The provision’s language may still shift ahead of the Senate’s final vote. Once approved there, the bill must clear the House again, receive President Trump’s signature, and then survive inevitable lawsuits from states claiming it’s unconstitutional.
But whatever happens to this provision, the momentum to remove regulatory barriers on AI will persist. Amazon, Meta, Microsoft, and Google will continue to lobby for the laxest legislation possible, or none at all, now that such a move has entered the mainstream. It’s time to seriously consider the consequences of a federal moratorium.
If Congress enacts this provision — or a similar one — it will grant dramatic power to the creators of a new and largely untested technology. The moratorium will halt state efforts to protect children from AI harms, hold developers accountable for algorithmic discrimination, and encourage transparency in the development and use of AI — all without supplying any federal standards in their place.
There is significant confusion over the exact effect of the current Senate language, including whether it would still prohibit AI regulation in states that forgo BEAD funds. But even if the only effect of the moratorium is to revoke these funds, it will be a major roadblock for states considering AI regulation. The $42 billion BEAD program is the largest federal broadband investment to date, providing 56 states and territories with grants to help provide “communities of color, lower-income areas, and rural areas” with broadband access. If states are forced to choose between pursuing AI legislation and receiving broadband funding, many will justifiably prioritize the crucial funds over protecting their constituents from AI harms.
That said, there are early signs that some states could undermine the moratorium’s national impact by choosing not to comply. A press release announcing the California Report on Frontier AI Policy, for example, was titled “As Trump moves to decimate state AI laws, Governor Newsom taps the nation’s top experts for groundbreaking AI report.” And with several senators reportedly pushing to tie compliance to smaller pots of money, the leverage of the moratorium could further ebb.
Supporters of the moratorium argue that the growing patchwork of state laws regulating AI will make it difficult for companies to conduct business across state lines — forcing them to focus on compliance rather than innovation. As a result, America risks falling behind in the AI race with China.
Divergent state policies can indeed create compliance costs for companies. For instance, Colorado and Utah list different disclosure requirements in their AI policies, so companies need to have separate compliance processes for each state. This isn’t a prohibitive challenge for big companies with robust legal teams, but for startups, the relative cost is much higher. Some states, like New York, have already addressed this concern by limiting their proposals so they apply only to the largest frontier AI models. Importantly, there are no clear instances, to our knowledge, in which it is impossible for a company to comply with two different states’ AI laws at the same time. The additive burden may be inefficient and costly, but right now, divergent laws do not prevent cross-border operations.
Acknowledging the burden that divergent state rules can place on companies — especially smaller firms — does not necessitate support for a moratorium. Thoughtful state-level regulation can minimize these burdens and spur broader technology adoption by building public trust. In contrast, a regulatory void risks allowing serious harms to go unchecked, which in turn fuels suspicion of an emerging technology.
The proposed moratorium is a prohibition, not preemption — one for which there is little historical precedent. Traditional federal preemption replaces state laws with federal standards. Here, the federal government would be knocking out virtually all state and local laws without a federal law to fill the void it creates.
Some commentators have drawn a parallel to the Internet Tax Freedom Act of 1998, which imposed a three-year ban on state and local taxes on internet transactions. But the comparison doesn’t hold up: the ITFA addressed a narrow taxation issue, while the moratorium would prohibit state regulation of an entire sector. Even Big Tobacco, when it asked Congress to preempt state regulation of smoking in 1965, realized it would need some light-touch federal legislation in the form of warning labels.
Additionally, the proposed law violates the federalist principle that allows states — the laboratories of democracy — to experiment with policymaking. Republicans have long been champions of the federalist approach, which often helps motivate and improve subsequent federal legislation. In the case of deepfake pornography, for instance, state action preceded a federal bill. Given the novelty of AI, a state-based approach that allows for experimentation and buys the federal government time to regulate is particularly well-suited for this moment. The examples of social media and data privacy are noteworthy: since the federal government failed to regulate, states have stepped up, with Republican states taking the lead.
Finally, the process behind the moratorium has been rushed, given the stakes of such an enormously consequential technology. The language was drafted in only a couple of months, without hearings, public input, or sufficient expert feedback, and under the cover of “must-pass” budget legislation. While big tech companies are now advocating for the moratorium, they had not called for such a move in their public responses to the White House’s Request for Information to shape its AI Action Plan.
That the process of drafting the federal moratorium has been so rushed and inadequate adds insult to the injury of terminating the active policy experiments happening at the state level.
States have already begun enacting legislation to mitigate the harms caused by AI. These efforts tend to fall into four broad categories: AI bills of rights (codifying state residents’ rights when interacting with AI), laws addressing algorithmic discrimination, rules governing automated decision making, and laws that create working groups to study AI implementation and possible regulation. The moratorium would threaten all except the last category.
There are a few exceptions specified in the text of the Senate provision. The first two suggest that legislation removing barriers to AI deployment or streamlining procedures to aid AI adoption will still be enforceable, which is in line with the moratorium’s purpose: to accelerate AI development and deployment across the country. Another exception states that general laws that apply equally to non-AI systems will remain permissible.
While the exact scope of the bill is still being debated, it would likely affect thousands of state AI laws addressing children’s online safety, the transparency and accountability of models, and government AI adoption and deployment. A few examples:
Tennessee’s “Ensuring Likeness, Voice, and Image Security (ELVIS) Act,” sponsored by State Senator Jack Johnson (R), would likely be voided, since it expands Tennessee’s right of publicity to prohibit the unauthorized use of an individual’s name, likeness, voice, or image in AI-generated content.
Utah’s Senate Bill 149, which generally promotes AI deployment, might also be scrapped, due to its requiring certain disclosures about AI use and establishing civil liability for AI-related consumer protection violations.
New York’s “Stop Addictive Feeds Exploitation (SAFE) for Kids Act,” which bans “addictive” social media algorithms, could also be blocked because of its definition of algorithmic feeds, despite widespread support. This would come despite Republicans in Congress repeatedly raising online protection of kids as an issue.
Wisconsin’s 2023 law requiring disclosures in political advertisements that contain AI-generated content would likely be eliminated, too.
The Senate parliamentarian has cleared the way for the provision despite bipartisan opposition. Senator Hawley (R-MO), for instance, said he would “do everything [he] can to kill it,” and has signaled that he will offer an amendment to challenge it on the floor. The House Freedom Caucus, a group of the most conservative Republicans, has also called for the moratorium to be removed from the bill once it returns to the House for final approval.
Opponents of the moratorium are justifiably concerned about its duration. Ten years is a long time, especially with a technology evolving as rapidly as AI. Anthropic CEO Dario Amodei recently predicted that AI tools could eliminate half of white-collar, entry-level jobs, bringing the unemployment rate up to 20% in the next one to five years.
Policy innovation should at least attempt to keep pace with this technological transformation. While the federal regulatory apparatus cannot move at startup speed, states can afford to be nimble. Suspending state action for ten years and expecting the federal government to be responsive to a rapidly changing environment would be a high-risk move.
The proposal is also broadly unpopular with US voters. According to recent polling, 73% of US voters want both states and the federal government to regulate AI. These opinions are shared roughly equally by Democrats and Republicans.
If the moratorium does pass, its long-term survival seems shaky. Attorneys general from 40 states have spoken out against the moratorium, asserting their responsibility to defend their constituents. States could sue under the Tenth Amendment, arguing that Congress cannot strip their police powers without a genuine federal regulatory regime in place. Others may contest the constitutionality of the BEAD linkage. If these officials take legal action against the moratorium, as seems likely, their cases could eventually be decided by a heavily conservative Supreme Court that might look unfavorably on an encroachment on states’ rights.
While a 10-year moratorium is the wrong approach, there are real issues with the current patchwork of conflicting and inconsistent state AI laws.
One way to minimize the costs of a patchwork while avoiding a moratorium is to focus on areas where there is widespread agreement. For instance, both those concerned with AI risks and opponents of state regulations agree that private companies should disclose key information about model capabilities and safety practices.
Many AI labs have voluntarily committed to such policies, though some have since scaled those commitments back. Enforcing transparency would equip the public and policymakers with the information needed to make informed decisions about how to regulate AI. Whistleblower protections for employees of AI companies who sound the alarm about dangerous vulnerabilities or safety gaps also draw bipartisan support.
Knowing what is happening inside AI labs is essential to regulating them effectively. We should learn from our experience with social media companies. By the time whistleblowers spoke out and Congress heard testimony about policies that subjected children to sexual exploitation on social media platforms, serious harm had already been caused. We’ll be repeating the same mistake with AI companies unless we act soon.
Finally, opposing a moratorium doesn’t mean we should grant states carte blanche over AI policy. States could pass legislation that puts US competitiveness at risk. No state has done this yet, nor is any state on the verge of doing so. Congress could move to reduce this risk in a more targeted and considered way than the rushed proposal to prohibit all state AI laws. In the meantime, Congress could better inform itself by doing the real legislative work of holding hearings, issuing reports, and talking to governors and legislators in different states. Some sort of preemption bill might emerge from such a process. But it's unlikely to look like this one.