Civitas Outlook
Topic: Economic Dynamism
Published on: Aug 5, 2025
Contributors: Rachel Lomasky

The AI Reaper Isn't Coming

Summary
Unlike a monster, AI systems always have a power button.

Fears surrounding artificial intelligence (AI) often echo the plots of horror stories, envisioning AI as an enormously powerful, mythological creature lurking in the shadows, wielding dark magic. In this narrative, humanity has unleashed a force it doesn't fully understand, one poised to destroy it. To the fearful, AI resembles Lovecraft's Great Old Ones: mighty, intelligent beings so advanced and so far beyond human compassion that they can seem malicious.

For those not closely following computer science research, Large Language Models (LLMs) like ChatGPT seem to have emerged out of nowhere, reinforcing the perception of dark magic: golems or djinni that powerful sorcerers believe they can control but that ultimately outsmart them. This assumption of autonomy, perceived rather than actual, fuels the fear. However, AI is not a hidden power accidentally awakened from an ancient tomb; it is a forty-year-old algorithm now equipped with vastly more data, thanks to the declining costs of acquiring and processing it.

The constant discussion of AI as a "black box," along with the calls for transparency and explainability, can make these systems seem completely unknown and even more mysterious. While the precise internal workings by which an AI achieves its output remain opaque, researchers have a strong grasp of the form and function of its inputs and outputs. It cannot seize control of a system without human permission. In magical terms, AI is akin to a familiar, an animal companion that lives to serve its master, providing assistance and information. Humanity doesn't need a hero to stop an out-of-control AI. If an AI truly went rogue, the solution could be as simple as unplugging it, or throwing a bucket of water on it like the Wicked Witch of the West. This notion of an uncontrollable AI rests on the mistaken belief that these systems are, or will be, completely autonomous.

Horror stories often feature creatures relentlessly pursuing a singular, destructive goal. Much of the anxiety surrounding AI stems from a similar idea, taking the form of the "alignment problem," in which an AI's methods for achieving a goal do not align with reasonable, non-destructive approaches. For example, if an AI were instructed to "reduce human suffering," it might, like a literal-minded genie, fulfill this command by eliminating all unhappy people, thereby satisfying the literal request but not the intended purpose.

Indeed, this may be what an AI would write out as its solution in a chat window. However, AI systems are like very junior employees: they might sound knowledgeable, but they have vast "book smarts" with no practical experience and no authority to make decisions. In a corporation, the goal is "make a profit." But large companies don't hand such sweeping goals even to trusted senior staff, let alone to nascent technologies. They break tasks into much smaller, manageable chunks, such as "expand into a new market," which is then decomposed further into even smaller objectives. All these tasks are subject to rigorous project plans and management, with oversight increasing for larger initiatives. Any AI would be subject to similar constraints.

This oversight is why AI should not be seen as a mythical creature that might outsmart its keepers. Unlike the wily monsters of stories, AI lacks autonomy unless it is explicitly granted. Robust systems should be gated by control procedures, particularly where an alignment problem may exist: preemptive testing, monitoring, and corrections when a system misbehaves. Whether the actor is a human, classical machine learning, or artificial intelligence, granting it autonomy beyond its capabilities often leads to a PR disaster or a financial fiasco. Thus, systems are thoroughly tested before being deployed. Monitoring, including automated alerts and similar mechanisms, keeps them in line. Organizations should operate by the principle of least privilege, granting a system only the resources it needs to perform its task.

AI can sometimes act in unpredictable ways, especially during "black swan" events, situations so unusual that they were not represented in the AI's training data. For example, AI used for automated stock market trading, where speed is critical, has gone rogue several times before a human could catch it. When the unprecedented COVID-19 shutdown occurred, it disrupted many systems accustomed to the status quo. However, the humans supervising those systems were aware of the extraordinary circumstances and could reduce the AI's independence and institute additional control procedures to prevent issues. For example, grocery stores' inventory management systems struggled to forecast demand for items like masks and hand sanitizer; model administrators either heavily biased the systems to favor recent data over historical data for certain items, or manually overrode the predictions. Additionally, anomaly detection systems alerted them that buying behavior was atypical.

However, to make the risk seem scary and existential, AI critics assume it is impossible to shut down, while its creators believe it can be domesticated. Like The Thing, even if it appears to be stopped, there could be other hidden instances. Another frightening possibility is co-dependence: an organization becomes so reliant on an AI for essential functions that it refuses to shut the system down even as its demands grow to unsustainable levels. Opinions differ on how close we are to anything that intelligent, or even on whether the existing technology could achieve it. But just as with a movie monster, if an AI truly intends to destroy us, then eliminating it becomes the moral choice.

Horror stories often portray AI as a Frankenstein-like monster posing an existential threat to humanity. But these fears are overblown because, unlike a monster, AI systems always have a power button. The solution is not to fear AI, but to treat it as a complex system to be controlled. Supervisors and workers need to monitor, regulate, and restrict AI autonomy, much as we would with any powerful organization run by humans we don't fully trust. Operators of AI systems should strive for maximum transparency and interpretability, particularly to ensure the system can handle unforeseen consequences. AI will likely make mistakes in ways new and unexpected compared to humans. However, these are problems of scale, not extinction-level threats. There are legitimate concerns about AI's faults, but fears of an extinction-level event are more akin to a horror story than to reality. The machine that is out to destroy humanity is the printer.

Rachel Lomasky is Chief Data Scientist at Flux, a company that helps organizations practice responsible AI.
