Civitas Outlook
Topic: Economic Dynamism
Published on May 29, 2025
Contributor: Rachel Lomasky
OpenAI CEO Sam Altman attends the AI Revolution Forum in Taipei on September 25, 2023.

Rise of the AI Oligarchy?

Summary
The prospect of cutting-edge AI models raises the scenario of an oligarchy controlling what could become an essential new utility.

A palpable fear surrounds cutting-edge AI models, specifically the Large Language Models (LLMs) being hoarded by a select few corporate giants, including Meta, Google, and OpenAI, which alone possess the immense capital necessary to train them. This unsettling scenario envisions an oligarchy controlling what could become an essential new utility, as crucial for societal technological advancement and prosperity as steel and oil were during America's industrialization. Echoing the exploitative near-monopolies of Standard Oil and Carnegie Steel, such centralization risks exacerbating economic inequality and exploiting the general populace. On its face, it seems rational to fear that ownership of cutting-edge AI will remain concentrated among the handful of companies with the astronomical resources to train these models.

Adding to this unsettling vision, beyond the unintentional misinformation generated by algorithmic flaws like hallucinations, in which the model produces plausible but false information as an artifact of how it compresses its training data, LLM owners could actively manipulate these powerful tools. For example, by deliberately deprioritizing dissenting information or instructing human tuners to propagate certain biases, they could wield AI in deceptive ways. Alternatively, even owners striving for neutrality could inadvertently train their models on biased or erroneous data, propagating untruths. In more nefarious scenarios, oligarchs could transition from passively answering questions to actively generating and broadcasting misinformation, possibly tailored to resonate more effectively with specific demographics or individuals. Because LLMs are inherently opaque to both owners and users, detecting such manipulation could prove exceptionally difficult, even without deliberate obstruction. This challenge, however, mirrors many of the known biases and blind spots of traditional human media.

The dominance of a handful of LLM companies undeniably rhymes with the conditions that allowed "robber barons" to flourish. Significant economic barriers to entry, stemming from the immense costs of computation, acquiring and processing internet-scale data, and retaining highly skilled engineers, limit the number of players. However, the market for large foundation models is far from a monopoly, and offerings exhibit some differentiation, particularly in safety and bias settings, such as varying approaches to what constitutes a "harmful" question. While all major players are indeed large corporations (or backed by them), serious new contenders continue to emerge in the LLM landscape, and significant market consolidation has yet to occur.

Contrary to concerns about price gouging, the LLM market has exhibited a precipitous reduction in model costs. A prime example is GPT-3.5 Turbo, which launched at $0.002 per 1,000 input tokens but saw its price plummet by 75% to $0.0005 within a single year. Other models have experienced similarly significant decreases. Although cutting-edge models command higher price points, the cost of their predecessors consistently decreases. This aggressive pricing is a direct consequence of intense market competition and the rapid advancement of AI technology.
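To make the scale of that decline concrete, here is a back-of-the-envelope sketch in Python using the per-1,000-token prices cited above; the monthly token volume is a hypothetical workload, not a figure from any provider.

```python
# Back-of-the-envelope cost comparison using the per-1,000-token prices cited above.
# The workload size (750,000 input tokens per month) is a hypothetical example.

PRICE_AT_LAUNCH = 0.002 / 1000        # dollars per input token at launch
PRICE_ONE_YEAR_LATER = 0.0005 / 1000  # dollars per input token a year later

monthly_input_tokens = 750_000        # hypothetical monthly usage for one application

cost_then = monthly_input_tokens * PRICE_AT_LAUNCH
cost_now = monthly_input_tokens * PRICE_ONE_YEAR_LATER

print(f"Monthly input cost at launch:    ${cost_then:.2f}")
print(f"Monthly input cost a year later: ${cost_now:.2f}")
print(f"Reduction: {100 * (1 - cost_now / cost_then):.0f}%")
```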

The burgeoning "open model" ecosystem is exerting considerable pressure on the pricing of closed, exclusive LLMs. These accessible alternatives frequently achieve performance comparable to, or nearly on par with, the leading proprietary offerings, thereby facilitating broader LLM democratization. The Hugging Face registry alone catalogs 1.7 million open generalist and specialist models, a roster that includes Meta's LLaMA, Mistral AI's models, DeepSeek's R1, and Google's Gemma (derived from its Gemini research). A critical distinction, however, is that many of these are "open" in terms of released code and weights (the results of training) but not fully "open source," since the underlying training code and data are typically withheld, preventing full reproducibility. For most applications this is sufficient: pre-trained models perform well out of the box, perhaps with some fine-tuning. Moreover, like numerous open-source endeavors, they operate under licenses that govern usage, modification, and distribution, though most are quite permissive, primarily restricting deployment by substantial commercial entities. Nevertheless, the collaborative nature of open-source projects encompasses community-driven development, improvement, and validation, serving as a powerful accelerator that fosters enhanced transparency, rapid iteration, and ultimately, more resilient and sophisticated models.
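As a rough illustration of how low the barrier to using these open models has become, the sketch below pulls an open-weight model from the Hugging Face registry and generates text with the widely used transformers library. The specific model name is only an example (and some models require accepting a license on the Hugging Face site before downloading); any open text-generation model from the registry could be substituted.

```python
# Minimal sketch: run an open-weight model pulled from the Hugging Face registry.
# Requires `pip install transformers torch`. The model name is an example and can be
# swapped for any other open text-generation model; some models also require
# accepting a license on the Hugging Face site before they can be downloaded.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # example open model; substitute any other
)

result = generator(
    "Explain in one sentence why open-weight language models matter.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```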

Long-term, inference costs (the operational expenditure of using a model to generate content) frequently surpass training costs, because training occurs only periodically, whereas inference happens every time the model is used. This is precisely where open models offer a significant advantage: they can be hosted on platforms that reduce the expertise and infrastructure required to run them. For example, new services like Together AI, RunPod, and Lambda, alongside traditional cloud providers such as AWS and Azure, now facilitate hosting and fine-tuning of open models, often with templates that guide model selection and deployment. This greatly eases the process of building new applications on open models and migrating away from closed ones, although it is still not as straightforward as merely calling an API for a closed model.
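For a sense of what using a hosted open model looks like in practice, here is a sketch that calls one through a provider's OpenAI-compatible endpoint. The base URL, environment variable, and model identifier follow Together AI's documented conventions at the time of writing, but they should be treated as assumptions to verify against the provider's current documentation.

```python
# Sketch: calling an open model hosted by a third-party provider through an
# OpenAI-compatible endpoint. Assumes the provider (Together AI here) exposes such
# an endpoint; the base URL, key variable, and model name are assumptions to verify.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # provider's OpenAI-compatible endpoint (assumed)
    api_key=os.environ["TOGETHER_API_KEY"],  # provider-issued key (hypothetical env var)
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3-8b-chat-hf",   # example hosted open model identifier
    messages=[
        {"role": "user", "content": "Summarize the case for open models in two sentences."}
    ],
)
print(response.choices[0].message.content)
```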

Moving beyond mere "open models," efforts are now focused on achieving even greater transparency through truly open-source models trained on independent, decentralized compute clusters. These initiatives replace traditional private GPU clusters with decentralized AI networks, often leveraging blockchain for resource management. Unlike many "open" models, the training process for these truly open-source alternatives is transparent and verifiable by third parties, allowing independent auditing of bias, performance, and other critical aspects. For example, Bittensor operates a tokenized economy in which users earn tokens for training and evaluating models on their own hardware and contributing them to the community, with tokens also redeemable for consuming modeling services. Similarly, platforms like Gensyn enable users to train models on idle compute resources, including smaller data centers and personal gaming machines, thereby reducing reliance on the large corporations that control vast GPU clusters. Meanwhile, the new agentic frameworks are composed of components, each of which can communicate with a different LLM, so a user can swap out one component without overhauling the entire application, as sketched below. Standards are emerging to make this even easier, such as Model Context Protocol, the open-source standard recently introduced by Anthropic.
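The component-swapping idea can be illustrated without committing to any particular framework or to Model Context Protocol itself. The sketch below is a deliberately minimal, hypothetical agent pipeline in which each component is bound to its own LLM backend, so one backend can be replaced without touching the rest; the "models" here are stand-in functions, not real API calls.

```python
# Illustrative sketch (not a specific framework): an agent pipeline whose components
# each talk to their own LLM backend. Swapping one component's backend does not
# require touching the others. The backends below are stand-ins for real API calls.
from typing import Callable

LLM = Callable[[str], str]  # any function mapping a prompt to a completion

def make_researcher(llm: LLM):
    return lambda question: llm(f"List key facts about: {question}")

def make_writer(llm: LLM):
    return lambda facts: llm(f"Write a short paragraph from these facts: {facts}")

def run_pipeline(question: str, researcher, writer) -> str:
    return writer(researcher(question))

# Stand-in backends; in practice these would call different providers or local models.
cheap_local_model: LLM = lambda prompt: f"[local-model answer to: {prompt}]"
frontier_api_model: LLM = lambda prompt: f"[frontier-model answer to: {prompt}]"

# Swap only the researcher's backend; the writer component is untouched.
print(run_pipeline("open-source AI",
                   make_researcher(cheap_local_model),
                   make_writer(frontier_api_model)))
print(run_pipeline("open-source AI",
                   make_researcher(frontier_api_model),
                   make_writer(frontier_api_model)))
```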

Governments are actively fostering AI development and access, mirroring the role public universities play in supplementing more costly private education. For instance, collaborative initiatives between industry, academia, and the media have received funding from entities such as the French National Institute for Research in Digital Science and Technology and the German Federal Ministry for Economic Affairs and Climate Action. Similarly, the National Science Foundation in the U.S., in partnership with federal agencies and non-governmental partners, supports a pilot institute dedicated to AI research, including its impact on society. The UAE further exemplifies this trend by releasing its Falcon LLM family under the permissive Apache license. Crucially, although these models are developed by government-funded organizations, their open licenses make them globally accessible, not just to the citizens of the sponsoring countries. Moreover, significant efforts are underway to democratize LLM development beyond English, particularly for resource-poor languages. Thus, even if the corporations become exploitative, these government efforts should serve as a safeguard that keeps broad, democratized LLM usage possible.

Beyond government initiatives, significant trends are driving down the cost of training and deploying language models. Advances in chip technology and algorithmic efficiency are making the underlying infrastructure for LLMs increasingly affordable. Furthermore, Small Language Models (SLMs) offer a compelling alternative, often dramatically less expensive to train and use while exhibiting only a minor performance drop. In fact, for specific applications, fine-tuning can enable an SLM to outperform a much larger, generalist LLM. Crucially, these smaller models can be trained and fine-tuned on significantly less expensive hardware, and some are small enough to run on a smartphone, greatly expanding the range of compatible hardware providers. This versatility enables training and deployment on a customer's existing hardware, circumventing the need for prohibitively expensive computing clusters.
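As one concrete example of why smaller models fit on modest hardware, the sketch below prepares a small open model for parameter-efficient (LoRA) fine-tuning with the Hugging Face peft library, so that only tiny adapter matrices are trained rather than the full model. The model name and LoRA settings are illustrative assumptions, and the dataset and training loop are omitted.

```python
# Sketch: preparing a small language model for parameter-efficient (LoRA) fine-tuning,
# the kind of adaptation that fits on modest hardware. The model name and LoRA settings
# are illustrative assumptions; the dataset and training loop are omitted.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

lora_config = LoraConfig(
    r=8,                                   # low-rank adapter size
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections typical of Llama-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
# Only the small adapter matrices are trainable -- typically well under 1% of the weights.
model.print_trainable_parameters()
```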

The ability to easily switch LLMs serves as a critical safeguard against issues such as escalating costs, declining performance, political censorship, or emergent ethical concerns. Facilitating this flexibility are platforms like Hugging Face, which dramatically simplify the discovery, comparison, and acquisition of models. A key enabler of this agility is the prevalence of standardized Application Programming Interfaces (APIs) across both open-source and closed-source LLMs. These APIs hide the model's internal complexity behind a straightforward interface, so that updating an application's LLM largely amounts to changing which model it points to. This architectural design considerably reduces the technical overhead of integrating a different LLM. Moreover, a robust ecosystem of software libraries provides even higher levels of abstraction, rendering model changes remarkably simple. For example, popular tools like LangChain enable model swaps through the modification of merely a few lines of configuration code.
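A minimal sketch of the kind of swap described here, assuming LangChain's provider-specific chat classes and valid API keys; the model names are examples only. Both classes expose the same invoke() call, so changing providers is effectively a one-line edit.

```python
# Sketch of a provider swap using LangChain's chat-model interfaces.
# Requires `pip install langchain-openai langchain-anthropic` and the relevant API keys.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

prompt = "In one sentence, what keeps an LLM market competitive?"

llm = ChatOpenAI(model="gpt-4o-mini")  # current backend (example model name)
# llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # swap providers by changing this one line

print(llm.invoke(prompt).content)
```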

Despite market assurances against the concentration of AI power, as well as open-source initiatives and governmental efforts, many countries still advocate for AI regulation through licensing boards and audits. Proponents often frame AI as a “utility,” akin to water or roads. However, the rationale for regulating traditional utilities is not relevant to AI. While both exhibit substantial upfront capital investments and economies of scale with low marginal costs per additional user, the distinctions far outweigh this superficial similarity. Unlike water or roads, LLMs are not a fundamental necessity.

Furthermore, having multiple LLMs delivering services to the same location is not inefficient; indeed, many software applications strategically leverage diverse LLMs to capitalize on their individual strengths. Crucially, the demand for LLMs is far more elastic than the demand for basic utilities, as users can relatively easily switch to alternative services. Lastly, unlike largely uniform services such as electricity, AI offerings vary widely.

Proponents of AI regulation should recall the rapid historical shifts in tech, where even dominant companies in critical industries are often overtaken by open-source initiatives. At the turn of the 21st century, many tech sub-industries, particularly those providing essential infrastructure, appeared to be controlled by oligarchies. However, over the last 25 years, the web server market transformed from one dominated by Microsoft and Sun to being nearly entirely led by open-source Apache HTTP Server and nginx. In relational databases, though Microsoft SQL Server and IBM's DB2 persist in legacy systems, new initiatives overwhelmingly utilize open-source MySQL and PostgreSQL.

Closed programming languages have also lost substantial market share, with open-source alternatives now powering most new applications, leading to faster innovation and richer ecosystems. The notable exception to the decline of closed alternatives remains operating systems, with Windows still a vast presence, though open-source Linux is a formidable and growing player, even within Microsoft's offerings. The nascent LLM industry exhibits a similar trajectory, with large companies initially setting the market, only to face increasingly intense competition and often being surpassed by open-source alternatives.

The inherently online and transnational character of LLMs provides significant resilience against government overreach and the cronyism that could perpetuate oligarchical market structures. The global nature of LLM development means users frequently remain unaware of a model's national origin, with many projects being truly international collaborations. This decentralization fundamentally disarms the potential for user exploitation by monopolistic firms. It is simply infeasible for such a diverse array of stakeholders, ranging from highly profitable corporations to independent open-source initiatives and various governmental projects, to collude effectively to eliminate competition. While a specific nation might attempt to block access to all dissenting LLMs, such an action would mirror a government's suppression of independent news sources, indicating a far more profound societal crisis.

Rachel Lomasky is Chief Data Scientist at Flux, a company that helps organizations do responsible AI.
