Civitas Outlook
Politics
Published on May 12, 2026
By Kevin Frazier
The End of the AI Binary

Summary
The collapse of the accelerationist-doomer binary is overdue.

In a matter of days, the nature of AI policy debates has undergone a sea change. In simplified (yet fairly accurate) terms, prior to May, the debate pitted accelerationists against doomers. The former advocate for the rapid deployment of even highly capable AI models with the goal of diffusing their benefits as quickly as possible. The latter demand extensive testing of AI models prior to their general release to mitigate real harms and, to this point, hypothetical risks.

The Trump Administration was regarded as having adopted a fiercely accelerationist stance. Vice President J.D. Vance dismissed concerns about safety in a barnburner speech at the Paris AI Action Summit. The Administration rescinded Biden-era AI policies that it regarded as barriers to America’s ongoing efforts to dominate AI development and diffusion. It later issued an AI Action Plan that emphasized “harnessing the full power of American innovation” and listed recommendations that generally aligned with a light-touch approach to AI governance. President Trump also announced the formation of a task force within the Department of Justice to challenge state laws that unduly interfere with AI development and the nation’s AI ambitions. White House staffers have since acted on that broad policy direction, leaning heavily on state legislators contemplating AI policies they suspected might interfere with the President’s AI vision. 

The times have changed. 

And, likely for the better, so long as Congress takes the lead in adopting a more nuanced, empirically driven approach to AI governance rather than defaulting to executive actions that may lack both clear legal authority and durability.

Advances in AI and Shifts in the White House’s Approach

Independent assessments of the latest AI models indicate that public and private actors, including operators of critical infrastructure such as banks and hospitals, are unprepared for feasible, destructive uses of these tools. Evaluation of Anthropic’s Mythos and OpenAI’s ChatGPT 5.5 by the United Kingdom’s AI Security Institute (AISI) revealed that a low-sophistication bad actor could use those models to successfully carry out extensive cyberattacks. The AISI’s findings aligned with internal assessments completed by the respective labs. 

Both Anthropic and OpenAI decided to withhold general access to such tools. The labs acted on concerns about the damage that could result if bad actors were able to get their hands on the latest and greatest AI at the same time as the general public. Many of the nation’s civil and military authorities would be in a vulnerable position if they suddenly found themselves pitted against many more cyberattackers leveraging incredibly powerful tools. 

A quick example: a 2025 report by the Inspector General of the Department of Defense concluded that the Department of the Navy has "made minimal progress in mitigating cybersecurity vulnerabilities." The Navy was faulted for failing to establish basic components of cyber readiness, such as clearly defining which actors were responsible for protecting which systems and developing plans to respond to potential exploitation of vulnerabilities in those systems. Absent such safeguards, the report flagged that adversaries or malicious actors have "opportunities to adversely affect critical missions or functions and the [Navy's] ability to deploy, support, and sustain military forces worldwide."

The Navy is not alone in failing to adopt best cyber practices. The Department of Homeland Security was also recently flagged for being behind the curve on key cyber practices. More generally, a report by the Center for Strategic and International Studies (CSIS) indicated that such concerns exist across the 16 sectors that make up the nation’s critical infrastructure:

Approximately 50 to 85 percent of critical infrastructure is privately owned, depending on the sector, from large corporations to small and medium-sized businesses (SMBs). This fragmented ownership creates a patchwork of varying levels of cybersecurity readiness, with many running legacy systems no longer supported by vendors and unable to be patched or updated to protect against modern cyber threats.

The White House appears to have connected the dots between evidence of highly capable AI tools and ongoing concerns about cyber vulnerabilities in critical domains. The New York Times reported that the President’s team was weighing a pre-deployment review system. National Economic Council Director Kevin Hassett then told Fox News that an FDA-like process may be necessary to ensure AI models are safe prior to general release. The Wall Street Journal disclosed that China and the US may add AI to their agenda at an upcoming meeting. More specifically, this may mark the start of a "recurring set of conversations that could address the risks posed by AI models."

A previously blunt policy posture (AI everywhere, now) seems to have been reshaped in light of compelling signs of AI advances that will likely continue. What remains to be decided is whether the President will respond with a regulatory framework tailored to the unique and shifting nature of AI and aligned with constitutional principles.

Principles to Guide Next Steps

Legislative and executive officials must develop a framework to ensure broad access to the best AI tools as soon as possible, while safeguarding critical infrastructure. Members of the general public do not need immediate access to AI tools with incredibly sophisticated cyber capabilities. They do need functioning hospitals, dams, election systems, and financial institutions. Three principles should guide how the federal government moves forward:

  1. The President does not currently have a legal basis to establish a pre-deployment vetting regime

The Defense Production Act and related existing emergency provisions were not enacted with such authority in mind. Despite the exigencies of this moment, the Constitution does not condone vast expansions of executive power without identified legal authority. As Supreme Court Justice Jackson pointed out in the 1952 decision that denied President Truman the authority to seize steel mills during a labor dispute amid the Korean War: “That comprehensive and undefined presidential powers hold both practical advantages and grave dangers for the country will impress anyone who has served as legal adviser to a President in time of transition and public anxiety.” Yet public anxiety does not excuse any presidential act.

“When the President takes measures incompatible with the expressed or implied will of Congress,” Justice Jackson pointed out, “his power is at its lowest ebb, for then he can rely only upon his own constitutional powers minus any constitutional powers of Congress over the matter.” Congress has had ample opportunity since the release of ChatGPT in late 2022 to consider and advance legislation that would impose process checks on AI deployment. It has held dozens, if not hundreds, of hearings on the topic. It has held closed-door meetings with high-ranking AI lab officials. It has so far opted not to act. But inaction is not an invitation for a presidential response.

Congress must pass a new law to establish a mandatory vetting process. 

  2. The federal government needs to increase its capacity to distinguish between AI models that pose cybersecurity and related risks and those that can be released with minimal foreseeable harms

Contrary to popular belief, it is possible to gain a firm understanding of the capabilities of AI models. The UK AISI, for example, has invested heavily in developing and running simulations that provide a fairly reliable signal of how AI models could be used in the wild. The Center for AI Standards and Innovation (CAISI), part of the National Institute of Standards and Technology, is similarly devising novel methods to test AI models. Several leading labs have voluntarily agreed to subject their models to CAISI testing, which should help generate as much reliable information as possible about these models. Yet for CAISI to do this work well, it would require substantially more resources and talent. Analysis of highly capable models depends on experts who are in high demand and, by extension, command high pay and need access to expensive resources.

Congress needs to scale up the government’s capacity to understand the latest AI models so that safe systems can be deployed to the public as soon as possible. It can start by passing the bipartisan AI Talent Act. 

  3. All subsequent AI governance must be iterative and evidence-based

AI capabilities are not static, and neither is the science of evaluating them. Labs are making real progress on interpretability and safer model design. CAISI's testing protocols will improve. The risk profile of frontier systems will shift. Any vetting regime that hardens into a permanent process will calcify around yesterday's threat models. Congress should build sunset provisions, mandatory review windows, and clear off-ramps into whatever it enacts.

Conclusion

The collapse of the accelerationist-doomer binary is overdue. It was always a false choice. One can believe that AI represents the most consequential technology of this century, that its rapid diffusion will generate enormous gains for ordinary Americans, and that the United States must lead its development, and still recognize that releasing the most capable cyber-offensive tools into a country whose hospitals, water systems, and financial institutions cannot defend themselves is a recipe for very bad outcomes.

Pro-AI does not require being pro-recklessness. It means building the conditions for AI to be deployed widely and quickly while enabling core institutions to keep pace. The work ahead is to close the gap between what frontier models can do and what civil society, private firms, and public agencies are prepared to handle. That posture ought not to be conflated with a brake on progress. It is what progress actually requires. Public backlash to AI will only grow if Americans perceive government officials as knowingly bypassing safeguards. For AI labs to earn and maintain a social license to operate, and for the government to progress toward its stated AI objectives, a nuanced, Congress-led framework is necessary.

Kevin Frazier is an Adjunct Research Fellow at the Cato Institute, a Senior Fellow at the Abundance Institute, and the Director of the AI Innovation and Law Program at the University of Texas School of Law.
