
The AI Future: Between Certain Doom and Endless Prosperity
If we resist the pull of extremes and commit to disciplined, rights-respecting, iterative governance, the AI age will not be defined by doom or delirium.
The generative AI tools that dominate headlines today were introduced more than three years ago. Technologists will tell you that the models have advanced significantly over that period. In fact, they’d likely say that AI has progressed beyond their wildest expectations. Experts at the leading AI labs may discuss how their models can complete hours-long tasks on behalf of users. Startup founders may brag about AI tools that make doctors drastically more accurate and allow them to spend far more time with patients. Researchers at so-called “neolabs” may talk your ear off about new models with capabilities and characteristics even more impressive than those on the market today. The consensus will be that AI has become more complex and sophisticated. The same cannot be said of the popular discourse around AI or of our public policy responses.
Since early 2023, the AI discourse in the popular press and in legislative chambers has been defined by extremes. Then-Majority Leader Chuck Schumer invited AI experts to the Senate and heard extensively about the existential risks posed by AI. He wasn’t the only one to associate AI with the potential end of humanity. Then-FTC Chair Lina Khan shared that she had a p(doom) of around 15 percent, her estimate of the odds that AI would cause a cataclysmic event. A sense of inevitable demise continues to pervade some AI conversations. Dario Amodei, CEO of Anthropic, recently forecast that AI would displace most entry-level white-collar jobs in the span of just a few years. Others have envisioned AI fueling authoritarianism and upending the geopolitical order. Yet not everyone has settled on this picture of the AI Age.
Today, plenty of folks are convinced of exactly the opposite. Tech luminaries such as Elon Musk envision a bright future in which humanity is surrounded by abundance. Conversations around the end of work, universal basic income, and similar outcomes (utopian, at least to some) pass for normal chatter these days. Perhaps paradoxically, some AI experts simultaneously suspect that both dire and dreamy futures could lie ahead. Amodei, for one, has touted the possibility of AI curing most cancers.
Where does this leave most Americans? What does this mean for AI regulation?
It means we’re dodging the much harder, more boring, and more detailed conversations we ought to be having about how to adjust to this new technology. The questions asked in polls, the headlines in the press, and the guests filling our podcast feeds have made it all seem like an all-or-nothing proposition. In turn, popular attention and legislative resources have been spent on edge cases. Legislators have been captivated by reports that AI will become effectively ungovernable by 2027. Communities have zeroed in on troubling and tragic but unrepresentative uses of AI tools. And the AI celebrities who garner the most public attention seem keen to fan these flames.
Critically, this status quo is disempowering. Those who bought into the utopian vision may rest on others’ promises and simply await a bright future. Those sold on the end of the world presumably feel powerless to stop the AI madmen’s march. We’re consequently missing the often determinative sway of the silent majority: those who have yet to join either camp and simply want society to adjust as necessary to drive and spread human flourishing.
Preparing for the AI future means settling in for a decade (if not decades) of adjustment and transition. That’s part of the reason an essay by Matt Schumer went viral: he made clear that the tidy futures painted by some are unlikely and that we’re instead going to experience a mix of wonderful progress and difficult setbacks. People will lose jobs. People will experience mental disquiet when exposed to new ideas. Communities will change. Culture will shift. And politics will have to improve, and resist knee-jerk reactions, if we’re going to navigate all these alterations.
Success amid technological transition requires the discipline to avoid distractions from doomers and excessive dreamers. Add to that the need for persistence: methodically testing, measuring, and revising new strategies to revive and spread prosperity. A few principles should inform this effort:
- Long-term adjustments are necessary. Thinking over short-term time horizons, such as which policies are most likely to appease voters in November, will result in false starts. Policies like robot taxes have a certain appeal when tech is framed as the sole source of our woes. We have to resist the temptation to buy into policies that seem too good to be true (because they are).
- Typical regulatory regimes are inadequate. Government policies are not designed to move at the speed of AI. New tools are being deployed, tested, and revised faster than any committee can keep pace. Moreover, labs themselves are adjusting their internal policies in rapid response to user and public feedback. Rather than view this as a problem, it’s best to treat it as an opportunity to update how we write, enforce, and measure laws.
- Fundamental rights must be safeguarded. Compelling solutions will emerge from strange places and cobble together broad support. The haste to “do something” to calm the unease of not knowing what the future holds will be tough to overcome, yet there must always be a backstop. Our freedom to think, to work, to raise our families, to practice our faith, to receive information, and to convey our ideas must be shielded from unnecessary and unjustified government intervention, even when it is supposedly motivated by popular support.
If we get this wrong, it will not be because we failed to predict the precise year when models surpass human performance on a benchmark, or because we miscalculated the probability of catastrophe. It will be because we chose spectacle over stewardship.
The AI transition is not a movie trailer. It is infrastructure. It is procurement reform. It is the licensing boards that decide whether to accept AI-assisted credentials. It is state workforce agencies rethinking training pipelines. It is judges grappling with evidentiary standards. It is school districts that determine how to teach writing in an era of copilots. None of that fits neatly into a p(doom) estimate or a utopian keynote. All of it determines whether this technology expands opportunity or narrows it.
The task before us is neither to freeze AI in place nor to surrender to it. It is to govern a moving target without pretending it will stand still. That requires humility about forecasts, seriousness about tradeoffs, and a willingness to iterate. We will need rigorously evaluated pilot programs. We will need regulatory frameworks that include sunset provisions and mandatory review. We will need agencies that measure outcomes rather than merely promulgate rules. And we will need political leaders who can say, without embarrassment, “We tried this. It did not work. We are adjusting.”
Most importantly, we need to re-center the conversation on agency: not the agency of AI systems, but the agency of citizens. Americans are not passive recipients of technological change. They are workers who retrain, entrepreneurs who experiment, parents who adapt, and voters who demand accountability. A serious AI policy agenda should equip them to navigate the transition, not treat them as subjects to be managed.
There will be disruption. There will be overcorrections. There will be bad actors. But there will also be new firms, new forms of work, new medical breakthroughs, and new ways of learning. The question is not whether AI will change society. It already has. The question is whether we will do the slow, unspectacular work required to shape that change in ways consistent with a free and flourishing republic.
The loudest voices will continue to sell certainty. They will promise salvation or warn of extinction. The harder path is less emotionally satisfying. It asks for patience, institutional reform, and sustained civic attention. Yet history suggests that democratic societies succeed not by perfectly forecasting the future, but by building systems capable of adapting to it.
If we resist the pull of extremes and commit to disciplined, rights-respecting, iterative governance, the AI age will not be defined by doom or delirium. It will be defined by whether we have the maturity to match technological acceleration with institutional evolution. That work does not lend itself to viral clips. It does, however, determine whether the next decade of AI becomes a story of concentrated power and public anxiety or of broad participation and renewed confidence in our capacity to govern ourselves.
Kevin Frazier directs the AI Innovation and Law Program at the University of Texas School of Law. He is also a Senior Fellow at the Abundance Institute and an Adjunct Research Fellow at the Cato Institute.