Civitas Outlook
Politics
Dec 16, 2025
Kevin Frazier

Beyond the Border: How Extraterritorial State Laws Risk America’s AI Ambitions

Summary
The innovative capacity of American AI risks being confined not by technological limits, but by the artificial, costly, and constitutionally questionable borders of state law.

I drove from Miami, Florida, to Austin, Texas, this summer. During my crisscrossing of several states, I adjusted to each state’s speed limits, traffic patterns, and road networks. I did so without much thought; it’s hard to miss speed signs, and it’s easy enough to adjust from a two-lane highway in the Panhandle to the 26-lane mega-highway through Houston. That’s how state policymaking is supposed to work – tailored to the conditions and preferences of a single state, while not inhibiting interstate commerce.

Yet many state legislators are advancing AI policies that amount to allowing each state to regulate a car’s engine. Imagine a cross-country road trip where you had to switch out your engine every time you entered a new state. That’s a trip you likely wouldn’t take. 

Few folks would be excited by the prospect of pulling off the road, paying a mechanic to install a state-sanctioned engine, and then carrying on. The net result would be a less mobile, less free public. Many more people would effectively be confined to their state’s borders. Such regulations would surely impact commerce, too. Many small businesses would likely forgo interstate opportunities; only massive corporations could afford the time and expense associated with state-by-state engine laws. Commerce, then, would increasingly be concentrated in the hands of a few powerful interests.

The Founders sought to avoid the sort of fragmented economy that inhibits individual agency and fosters the monopolistic behavior that helped spark the Revolution. Contemporary departures from this vision rest on at least two faulty ideas: first, that the police powers reserved to the states via the Tenth Amendment permit them to regulate all aspects of AI; and second, that the states’ role as laboratories of democracy grants them free rein to experiment with AI policies.

The Nature of AI Regulations

Not all AI regulations are created equal. Well-intentioned laws may have unintended consequences. A lack of nuance in how AI is defined and a failure to consider the technical aspects of its development can turn a seemingly simple AI regulation into one with long-lasting, extraterritorial effects.

AI itself is an umbrella term for an increasingly broad set of distinct technologies and use cases. World models that power autonomous vehicles (AVs), predictive AI systems that have shaped education tools, and agentic tools that autonomously perform tasks on a user’s behalf are just a few examples of the expanding universe that’s implicated when “AI” is mentioned. Yet many lawmakers and members of the public focus only on generative AI tools, such as early versions of ChatGPT. 

Generative AI tools have rightfully elicited widespread concern. Reports of extensive use of such tools leading to tragic outcomes merit the attention of state legislators and regulators. However, it’s become clear that even the most supposedly AI-savvy jurisdictions are at risk of passing legislation that addresses not only these specific AI tools but also the broader universe of AI use cases. The California state legislature, for example, passed a bill intended to target “chatbot companions.” In practice, the bill would have effectively “ban[ned] access of anyone under 18 to general-purpose AI or other covered products,” as interpreted by the Computer and Communications Industry Association. Though Governor Gavin Newsom ultimately vetoed that bill, the fact that it was on the brink of becoming law despite such an expansive definition should raise red flags. Other governors may miss similarly flawed laws.

This underscores the importance of probing whether each state law falls within the regulatory authority and competence of the enacting state. If lawmakers are not careful, their laws may saddle AI tools with very different risk profiles under a single set of burdens. All Americans have an interest in avoiding such an outcome. If California errs by regulating world models alongside chatbots, for example, the rest of us will suffer from lost innovation — in this case, ever safer AVs and whatever other breakthroughs world models could bring about.

Caution is also necessary when state lawmakers try to regulate AI development, including the training and deployment of new AI models. Like a car’s engine, how an AI company trains and refines its models shapes its capabilities, characteristics, and value. In the same way that it’s practically infeasible to switch engines at every border, AI companies cannot comply with 50 different requirements for how to train or deploy their models. AI training is an incredibly resource-intensive endeavor. A single AI training effort may span three to four months and involve hundreds of millions of dollars worth of computational resources. As Rep. Ted Lieu acknowledged, a lab could not afford to comply with even a few state-imposed obligations.

Here again, state legislators have moved forward with laws that implicate this critical stage of AI innovation. New York legislators passed the RAISE Act, which, if signed by Governor Kathy Hochul, would ban developers from releasing a model that “would create an unreasonable risk of critical harm.” Compliance with this prohibition would require labs to devise a set of specific tests and evaluations that address the State of New York’s concerns. As AI advances and regulatory concerns evolve, the steps labs must take to meet this test may shift, imposing new compliance costs. This single provision, in and of itself, marks a significant regulatory hurdle — now imagine tasking AI labs with meeting such varied tests across multiple states.

This is not to say that New York and others have no means to regulate AI tools that may pose real and troubling risks. Just as states have speed limits that force drivers to alter their driving behavior – not their vehicle – states can and should likewise regulate AI to comport with the needs and preferences of their residents. This sort of regulation aligns with the historical understanding of each state's regulatory authority and promises to deliver the intended benefits of states acting as laboratories of policy. 

The Limits of State Police Power

States have extensive, yet finite, authority to pass laws pursuant to their police powers. Though the Constitution neither mentions nor defines police powers, it has been widely recognized that the Founding generation sought to leave ample room for states to exercise "legislative and governmental power to provide for the countless and conceivable emergencies of local government." This is not a grant of absolute sovereignty. Still, the Founders set forth a wide lane for states; as the Supreme Court declared in 1827, "The framers of the Constitution did not intend to restrain the states in the regulation of their civil institutions adopted for internal government[.]" Each state was expected to "regulate its police, its domestic trade, and to govern its own citizens," and thus the Constitution affords them the power to "legislate on th[ese] subject[s] to a considerable extent."

Three limits shape the use of this power. The first is territorial. Note the implicit and explicit references to geography in the descriptions above. The second is substantive. Even local matters may not be governed by states if they fall into certain categories. States may not act "under the guise of police power" to "arbitrarily . . . invade the personal rights and liberties of individuals, interfere with private business, or invade property rights." This limit applies to residents and, especially so, to non-residents, who lack any direct means to challenge such laws. The third is evidentiary. A state ought not to pursue regulation under its police powers simply because it can. Instead, the legislature should only act pursuant to that power if it is "necessary" and "the means adopted by it to carry out its objective [are] reasonably necessary and appropriate for its accomplishment." Courts tasked with reviewing such laws are expected to assess "whether the exercise of power is actually necessary for the public good."

Laws like the RAISE Act violate each of these limitations. First, the RAISE Act’s broad scope, which applies to all qualifying models that are “developed, deployed, or operating in whole or in part in New York,” will extend its provisions to other states. Second, as the Act’s sponsors have acknowledged, they designed the law to address what they regard as national issues, rather than local concerns. And, third, the Act imposes requirements that have yet to be validated as effective. As the Joint California Policy Working Group on AI Frontier Models disclosed in a June 2025 report, the current state of AI evaluation and testing is evolving — experts have yet to agree on the necessary steps to release “safe” models. 

Despite these signs that expansive state AI bills are running beyond the bounds of their respective police powers, states are poised to plow ahead with related bills in 2026. 

The Limits on States and Laboratories of Democracy

A preferable regulatory approach would involve states passing narrower policies that address how AI is used or deployed — the equivalent of states imposing different driver’s license exams and setting different speed limits. These policies would also help realize the true aim of experimentation: each state having the opportunity to test novel approaches without becoming the guinea pigs in another state’s policy experiments.

Here’s how Judge Frank Easterbrook characterized the proper conditions for the sort of policy competition among states that can facilitate learning and inform better policymaking:

  1. people and resources can freely move to states of their choice;
  2. the number of states participating is high;
  3. states are free to opt in or out of any set of laws they so choose; and,
  4. the effects of any state’s laws are confined solely to the residents of that state.

As the prior analysis of the RAISE Act made clear, many states are on the verge of violating one or more of these conditions. States like California, New York, Illinois, and Colorado have rushed ahead of their sister states by enacting more substantive AI regulations. This conflicts with the importance of having a wide range of experiments. When these states pass laws that extend beyond their borders, the third and fourth conditions are violated. Finally, AI development so far has predominantly taken place in only a handful of states that had earlier accumulated the necessary financial and talent capital, which calls into question the first condition. 

The upshot is that it’s not at all clear that the current AI policy landscape is the sort the Founders thought would help the most appropriate regulations rise to the top. Instead of fostering a healthy, confined competition among states, the current environment is creating regulatory spillovers that transform sister states into unwilling policy subjects. When New York or California mandates resource-intensive testing and evaluation for model development that must, by necessity, be performed globally, it effectively imposes its regulatory regime on labs operating in Texas, Washington, or Massachusetts. This dynamic violates the fundamental principle that state policies should be tested locally before being adopted nationally, short-circuiting the beneficial feedback loop the Founders envisioned. Furthermore, by speeding ahead with highly prescriptive development laws, states are not competing on policy innovation but are instead entering a policy arms race — a race whose main casualty will be the mobility of AI talent and capital necessary to maintain American leadership in this critical technology. True policy innovation in AI requires states to respect the boundaries of their authority and allow the freedom of movement for both people and resources that defines a truly national market.

Conclusion

The road to AI governance is paved not with 50 divergent state mandates, but with clear, uniform rules for technologies inherently interstate and global in nature. The current trend of states overreaching their traditional police powers to regulate the development of the computational “engine” of AI models risks fracturing the American economy, stifling innovation that benefits all citizens, and ultimately undermining core democratic values like individual agency and economic liberty. When a state attempts to solve a national problem, such as how to develop frontier models responsibly, it inadvertently impairs the regulatory authority of another state over local issues; the constitutional limits designed to prevent such a fragmented system have been breached. State legislatures must urgently recalibrate their approach to focus on regulating AI use, where their territorial, substantive, and evidentiary authorities are strongest — the policy equivalent of setting a speed limit, not redesigning the vehicle.

This recalibration, however, is not sufficient. The burden of avoiding this interstate commerce catastrophe ultimately rests on Congress. As the only body vested with the authority to "regulate Commerce... among the several States," the federal government must step up to provide a clear national framework. This framework should establish baseline standards for high-risk AI models, preempting the destructive patchwork of conflicting, extraterritorial state development requirements that discourage investment and increase compliance costs for established and emerging innovators alike. Failing to act ensures that only the largest corporations, capable of navigating a labyrinth of conflicting state engine laws, will survive, further concentrating economic power and undermining the competitive spirit that drives American technological leadership.

To lead in the AI era, the United States must present a unified economic and regulatory front. Without immediate federal preemption in the domain of AI model development, and a renewed commitment from states to stay in their proper lanes of use-based regulation, the innovative capacity of American AI risks being confined not by technological limits, but by the artificial, costly, and constitutionally questionable borders of state law. This is a journey the nation cannot afford to take.

Kevin Frazier is the AI Innovation and Law Fellow at the University of Texas School of Law and co-host of the Scaling Laws podcast.
