
The AI Reaper Isn't Coming
Unlike a monster, AI systems always have a power button.
Fears surrounding artificial intelligence (AI) often echo the plots of horror stories, envisioning AI as an enormously powerful, mythological creature lurking in the shadows, wielding dark magic. In this narrative, humanity has unleashed a force it doesn't fully understand, one poised to destroy it. To the fearful, AI resembles Lovecraft's Great Old Ones: mighty, intelligent beings so advanced, and so far beyond human compassion, that they can seem malicious.
For those not closely following computer science research, Large Language Models (LLMs) like ChatGPT seem to have emerged out of nowhere, reinforcing the perception of dark magic: golems or djinni that powerful sorcerers believe they can control, only to be outsmarted. It is this perceived, rather than actual, autonomy that fuels the fear. But AI is not a hidden power accidentally awakened from an ancient tomb; it is a forty-year-old algorithm now fed vastly more data, thanks to the declining costs of acquiring and processing it.
The constant discussion of AI as a "black box" and the calls for transparency and explainability can make these systems seem unknowable and even more mysterious. While the precise internal workings that produce an AI's output are opaque, researchers have a strong grasp of the form and function of its inputs and outputs. An AI cannot seize control of a system without human permission. In magical terms, AI is akin to a familiar, an animal companion that lives to serve its master, providing assistance and information. Humanity doesn't need a hero to stop an out-of-control AI. If an AI truly were to go rogue, the solution could be as simple as unplugging it, or throwing a bucket of water on it like the Wicked Witch of the West. The notion of an uncontrollable AI rests on the mistaken belief that these systems are, or will become, completely autonomous.
Horror stories often feature creatures relentlessly pursuing a singular, destructive goal. Much of the anxiety surrounding AI stems from a similar idea, the "alignment problem," in which the methods an AI chooses to achieve a goal do not align with the intentions of the people who set it. For example, if an AI were instructed to "reduce human suffering," it might, like a literal-minded genie, fulfill this command by eliminating all unhappy people, satisfying the literal request but not the intended purpose.
Indeed, this may be what an AI would write out as its solution in a chat window. But AI systems are like very junior employees: they may sound knowledgeable, yet their expertise is all "book smarts," with no practical experience and no authority to make decisions. In a corporation, the overarching goal is "make a profit." Large companies don't hand such sweeping goals even to trusted senior staff, let alone to nascent technologies; they break the work into smaller, manageable chunks, such as "expand into a new market," which is then decomposed into still smaller objectives. All of these tasks are subject to rigorous project plans and management, with oversight increasing for larger initiatives. Any AI would be subject to similar constraints.
This oversight is why AI should not be seen as a mythical creature that might outsmart its keepers. Unlike the wily monsters of stories, AI lacks autonomy unless it is explicitly granted. Robust systems should be gated by control procedures, particularly where an alignment problem is possible: preemptive testing, ongoing monitoring, and corrections when a system misbehaves. Whether the actor is a human, a classical machine-learning model, or an AI, granting it autonomy beyond its capabilities often ends in a PR disaster or a financial fiasco. Systems are therefore tested thoroughly before being deployed, and monitoring, including automated alerts and similar mechanisms, keeps them in line. Organizations should also operate according to the principle of least privilege, granting a system only the resources it needs to perform its task.
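To make the idea of gating and least privilege concrete, here is a minimal sketch in Python of how an operator might whitelist what an AI agent is permitted to do and require human sign-off for anything riskier. The action names, the approval flag, and the alerting behavior are invented for illustration; they are not drawn from any particular product or from this article.

```python
# Illustrative sketch only: a hypothetical "least privilege" gate around an
# AI agent's proposed actions. ALLOWED_ACTIONS, REQUIRES_HUMAN_APPROVAL, and
# execute() are made-up names for this example.

ALLOWED_ACTIONS = {"read_inventory", "draft_report"}   # privileges granted to the system
REQUIRES_HUMAN_APPROVAL = {"place_order"}              # higher-risk actions are gated

def execute(action: str, approved_by_human: bool = False) -> str:
    """Run an action only if it falls within the system's granted privileges."""
    if action in ALLOWED_ACTIONS:
        return f"executed: {action}"
    if action in REQUIRES_HUMAN_APPROVAL and approved_by_human:
        return f"executed with sign-off: {action}"
    # Anything outside the granted scope is refused and raises an alert for review.
    print(f"ALERT: blocked action outside granted scope: {action!r}")
    return "blocked"

if __name__ == "__main__":
    print(execute("read_inventory"))   # permitted outright
    print(execute("place_order"))      # blocked until a human signs off
    print(execute("delete_database"))  # never in scope; triggers an alert
```

The point of the sketch is simply that autonomy is something the operator grants action by action, not something the system takes for itself.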
AI can sometimes act in unpredictable ways, especially during "black swan" events, situations so unusual that they were not represented in the AI's training data. AI used for automated stock market trading, where speed is critical, has gone rogue several times before a human could catch it. When the unprecedented COVID-19 shutdown occurred, it disrupted many systems accustomed to the status quo. But the humans supervising those systems knew the circumstances were extraordinary and could rein in the AI's independence and institute additional control procedures. Grocery stores' inventory management systems, for example, struggled to forecast demand for items like masks and hand sanitizer; model administrators either heavily biased the systems to favor recent data over historical data for certain items or manually overrode the predictions. Anomaly detection systems also alerted them that buying behavior was atypical.
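As a rough illustration of the guardrails described above, the sketch below shows one simple way a monitoring layer might flag atypical demand and weight recent data more heavily than history. The sales figures, threshold, and function names are hypothetical, invented for this example rather than taken from any actual retailer's system.

```python
# Illustrative sketch only: simple anomaly flagging and recency weighting,
# in the spirit of the interventions described above. Data and thresholds
# are invented.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag the latest observation if it sits far outside the historical range."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > z_threshold

def recency_weighted_forecast(history: list[float], decay: float = 0.5) -> float:
    """Average the series with exponentially larger weights on recent periods."""
    weights = [decay ** i for i in range(len(history) - 1, -1, -1)]
    return sum(w * x for w, x in zip(weights, history)) / sum(weights)

if __name__ == "__main__":
    weekly_sanitizer_sales = [40, 38, 42, 41, 39, 400]  # hypothetical demand spike
    if is_anomalous(weekly_sanitizer_sales[:-1], weekly_sanitizer_sales[-1]):
        print("ALERT: demand is far outside historical norms; review the forecast")
    print("recency-weighted forecast:",
          round(recency_weighted_forecast(weekly_sanitizer_sales), 1))
```

Either signal, an alert or a recency-weighted forecast, is a cue for the human administrator to step in, which is exactly the supervisory loop the article describes.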
To make the risk seem scary and existential, however, AI critics assume it is impossible to shut down, even though its creators believe it can be domesticated. Like The Thing, even an AI that appears to be stopped might persist in hidden instances. Another frightening possibility is co-dependence: an organization becomes so reliant on an AI for essential functions that it refuses to shut the system down even as its demands grow unsustainable. Opinions differ on how close we are to anything intelligent enough to engineer such a scenario, or on whether existing technology could achieve it at all. But just as with a movie monster, if an AI truly intends to destroy us, then eliminating it becomes the moral choice.
Horror stories often portray AI as a Frankenstein-like monster posing an existential threat to humanity. But these fears are overblown because, unlike a monster, AI systems always have a power button. The solution is not to fear AI but to treat it as a complex system to be controlled. Supervisors and workers need to monitor, regulate, and restrict AI autonomy, much as we would with any powerful human-run organization we don't fully trust. Operators of AI systems should strive for maximum transparency and interpretability, particularly so that unforeseen consequences can be handled effectively. AI will likely make mistakes in ways that are new and unexpected compared with human errors. These, however, are problems of scale, not extinction-level threats. There are legitimate concerns about the faults of AI, but fears of an extinction-level event belong to horror stories, not reality. The machine that is out to destroy humanity is the printer.
Rachel Lomasky is Chief Data Scientist at Flux, a company that helps organizations practice responsible AI.