
America's Litigation Addiction Threatens Its AI Leadership
The bill for choosing to regulate AI wrong won't be paid in courtroom defeats but in innovations foregone and opportunities unrealized.
It is tempting to entrust artificial intelligence (AI) governance primarily to courts, allowing tort law to evolve organically through case-by-case adjudication. In theory, that promises accountability without the rigidity of prescriptive rules, adaptation without bureaucratic ossification. In a world where courts can resolve disputes expertly and efficiently, and litigants file meritorious suits grounded in sensible rules, this approach is hard to contest. Parties injured by bad AI actors can seek recourse. AI innovators attentive to the law and the well-being of their users can continue to push the technological frontier. Well-intentioned individuals and stakeholders have filed suits based on this ideal conception of litigation. Perhaps most prominently, parents of teens who took their lives following extensive AI use are suing AI companies based on allegations that the AI tools exacerbated mental health crises to tragic ends.
But this general approach rests on a fundamental misunderstanding of what our civil justice system has become, and an even deeper confusion about what drives technological innovation and safer, higher-quality goods and services. Litigation is anything but efficient and, if state lawmakers proposing new AI liability schemes have their way, will increasingly involve disputes based on vague laws and open-ended theories of harm. The net result is that our markets are frozen, delaying the introduction of superior products. America’s modern tort system has become what the liability expert Peter Huber once famously labeled a sort of “uncommon law” that imposes a massive “litigation tax” on society and fails to accomplish its underlying safety objectives. Yale legal scholar George Priest went further, arguing that America’s civil courts “have become the most powerful regulatory institution of the modern state” and have diminished societal wealth and welfare in the process. At an even higher level, America exemplifies the errors that stem from juridification, a word familiar to observers like Dan Wang and J. Storrs Hall, who have studied our country’s troubling tendency to turn to law and lawyers to “solve” problems with litigation instead of engineering and innovation.
Lawsuits often do little more than line the pockets of lawyers; even the “winning” party is frequently left with little benefit after paying attorney fees and tallying the opportunity cost of spending months, if not years, in court. In this judicial environment, the courts are an unreliable source of the iterative policy approach needed to govern fast-moving technologies like AI systems. Overzealous litigation risks undermining the profound life-enriching benefits that could come from AI and advanced computational systems.
Thankfully, an alternative system that punishes bad actors and rewards responsible innovators remains available. It’s no secret that competitive markets with informed consumers and low barriers to entry have long spurred economic and social well-being. Yet, being pro-market has fallen out of fashion among too many policymakers who conflate support for capitalism with support for corporations. A reorientation is necessary and overdue as the future of AI faces severe threats from calls for judicial intervention and state-by-state regulation.
How Torts Went Wrong
Before we delegate AI governance to litigation, we should confront an uncomfortable reality: the American tort system has evolved into something remarkably ill-suited for the task advocates envision — namely, protecting consumers and rewarding responsible innovation. Between 2016 and 2022, tort costs grew at seven percent annually — nearly double GDP growth — yet seventy to eighty cents of every dollar funds lawyers and administrative overhead rather than compensating injured parties. Altogether, the United States has an astonishing one lawyer for roughly every 250 citizens, one of the highest proportions in the world. Because America’s litigation-heavy legal system lacks a “loser-pays” rule and is brimming with junk science, frivolous lawsuits often threaten to undermine many important innovations — especially newer, disruptive technologies that would improve societal choices and welfare. Worse yet, more than one-quarter of all the lawyers in the country are in just two states: New York and California, which also happen to be the two states currently proposing the most expansive AI regulations and liability schemes.
The procedural landscape compounds the dysfunction. Standing doctrine has grown increasingly hostile to the intangible, probabilistic harms AI often generates — reputational damage from algorithmic errors, privacy violations, discriminatory outcomes in automated decisions. Arbitration clauses now divert countless consumer disputes away from courts entirely. Class certification requirements make it incredibly difficult to aggregate the diffuse harms that new technologies may produce. For those who view courts as the preferred forum for resolving these disputes, the reality is that injured parties are increasingly unlikely to find viable recourse there.
These developments bring to mind what George Priest once described as the “extraordinary transformation in the law” that occurred in the 1960s and 70s, which ultimately created an entirely new “culture of tort law.” Priest identified how “the principal effect of the change in legal culture has been redistributive, not productive” and called it “a most unfortunate development for our country — really for the productivity of the world — because many other countries are affected by our legal culture.” He explained how the radical metamorphosis of tort law has tended to “diminish gains from economic growth that would add longevity and enjoyment to life.”
The dangers of this tort racket now loom large for AI developers. In practice, many AI-related claims will never reach the courts. Claims will be based on amorphous theories of harm intended to tie up developers and force them either to remove products from the market or to settle frivolous claims preemptively to avoid protracted legal battles and public relations headaches. The claims that survive will face years of expensive discovery battles over proprietary systems, expert witness disputes about causation in multi-actor supply chains, and appeals that span multiple jurisdictions. By the time any precedent emerges, the technology has evolved beyond recognition — or been driven off the market altogether.
Regulation by litigation isn't adaptive common law evolution — it's selection bias producing fragmentary, delayed signals that provide neither clear accountability for bad AI firms nor useful guidance for well-intended innovators. It is also likely to drive the new and smaller AI innovators America needs straight out of the market, because only the largest players with the deepest pockets and biggest legal teams will be able to handle the avalanche of litigation and settlement shakedown rackets. Contemporary litigation is a grinding, expensive process that devours resources while often delivering neither timely guidance to innovators nor meaningful relief to those harmed. It has also had negative implications for economic growth and national competitiveness, as Priest suggested.
As long as the smartest business strategy is to play it safe by releasing only products unlikely to invite lawsuits, relying on tort law as a regulatory tool risks weakening AI markets, along with the investment and innovation America needs to remain a global leader in advanced technology and a provider of life-enhancing algorithmic innovations that can benefit the public in many ways.
The Causation Maze and Innovation Friction
There's a practical problem tort advocates rarely address: AI systems involve complex, distributed supply chains. Foundation models are built by research labs, fine-tuned by specialized firms, deployed by platforms, and integrated into countless downstream applications. When something goes wrong, determining which actor breached which duty can become a complex and technical legal labyrinth. This uncertainty doesn’t just complicate liability after the fact; it also distorts innovation before it even begins. Companies cannot rationally assess risk when legal exposure depends on how future courts might apportion responsibility across multiple actors. The result isn't careful innovation. Instead, this system will produce defensive design, inflated insurance costs, and an ever-greater advantage for incumbents who can afford teams of lawyers and can endure protracted litigation.
This reflects a more profound misunderstanding at the heart of tort-first thinking: the assumption that adversarial adjudication can substitute for genuine market discipline. Real market discipline — the kind that encourages responsible development while preserving space for experimentation — requires things courts cannot supply: transparent information, clear baselines, and rapid feedback loops. Litigation, by contrast, is opaque, delayed, and backward-looking. It reveals little about how to avoid the next problem and often buries useful information under confidentiality agreements and procedural complexity.
For consumers, that means fewer opportunities to make informed choices about risk. Waiting years for litigation to expose a product’s safety record or design flaws does not empower users — it obscures the very information they need to exercise judgment. For developers, it means an absence of clear, ex ante signals about what constitutes responsible practice. Not prescriptive rules about architectures or algorithms, but intelligible baselines: transparency requirements scaled to risk, testing protocols for high-stakes applications, and incident reporting that fosters collective learning.
Contrast this with frameworks that clarify governance and accountability up front—not through heavy-handed mandates, but through adaptive, collaboratively developed standards. Industry working groups can define best practices; model contracts can allocate responsibility across supply chains; and voluntary certification schemes can signal compliance. Courts can then enforce these standards in discrete disputes rather than invent them from scratch through years of expensive litigation.
That’s how real markets self-correct: through open information and credible signals that allow consumers to reward trustworthy developers and discipline careless ones. Tort litigation, with its opacity, delay, and fragmented judgments, provides the opposite. It doesn’t create market discipline — it corrodes it.
Learning from What Actually Works
The most dynamic periods of American innovation haven't relied on tort litigation to set boundaries. They've featured transparent operating rules, low barriers to entry, and mechanisms for rapid feedback and adjustment.
Open standards enabled internet protocols to flourish. Voluntary safety testing accelerated automotive innovation. Third-party certification gave consumers confidence in novel technologies. In each case, clear baselines — developed collaboratively rather than imposed bureaucratically — created the informational infrastructure that let markets reward good actors and discipline bad ones.
Importantly, Congress took the time to calibrate public policy toward the internet and the digital sphere in the mid-1990s and dealt explicitly with the potential dangers of confusing or zealous litigation in the online sphere. Section 230 of the 1996 Telecommunications Act ensured that digital platforms would not automatically be held liable for the content and communications posted by others on their networks. Had Congress instead allowed litigation to run wild by holding digital intermediaries liable for every problem under the sun, it would have discouraged innovation, undermined competition from small players, and chilled free speech.
Better yet, after President Bill Clinton signed the Telecom Act into law, the White House released its 1997 Framework for Global Electronic Commerce, which stressed the need to “avoid undue restrictions on electronic commerce” and allow the digital world to develop as “a market driven arena not a regulated industry.” The Clinton vision was one of bottom-up governance and reliance on contractual negotiations, voluntary agreements, multistakeholder processes, and ongoing marketplace experiments. The Framework recommended “minimal government involvement or intervention,” but where intervention was needed, the document said, “its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.”
America became the global leader in computing and digital services precisely because this light-touch legal regime encouraged innovation and competition as the first means of improving outcomes, not regulation and lawsuits, and then relied on market-based negotiations or government-industry collaborations to solve more complex problems. A 2025 report from the Interactive Advertising Bureau found that the digital economy had doubled in size since 2020, reaching $4.9 trillion, or 18 percent of U.S. gross domestic product. The digital sector also supports 28.4 million jobs, and its employment is growing 12 times faster than the broader labor market. Section 230’s liability shield was a crucial part of that success story, as was the explosion of information platforms and speech-related opportunities that the public now enjoys as a result. Had the tort racket wrecking ball torn through the digital landscape, it is unlikely these gains would have occurred.
The Looming Liability Nightmare for AI
Yet, Section 230’s future is uncertain, especially as policymakers debate how it applies to generative AI. Some have warned that “the AI industry is steaming toward a legal iceberg,” and more lawsuits have been filed in the year since that story was penned. To be clear, some liability for AI-related harms will be necessary — for instance, when defective systems cause tangible injuries — but broad exposure for every controversial algorithmic output would drown innovators in a litigation swamp.
Opening the litigation floodgates now would be a disaster for algorithmic systems. It would undermine the remarkably innovative, open, and competitive nature of the current AI marketplace.
We’ve seen what happens when litigation and junk science run wild in other contexts. Asbestos, small aircraft, tobacco, and pesticides — the domains where we've relied primarily on litigation for governance — delivered neither timely accountability nor efficient deterrence. They produced decades of legal warfare, enormous transaction costs, innovation losses, and fragmentary outcomes that varied wildly by jurisdiction. Whatever compensatory justice emerged came far too late, at tremendous social expense, and without the clear guidance that might have prevented subsequent harms.
These are not models to emulate; they are warnings about institutional mismatch. Imagine applying those models to driverless cars, which have the potential to save many lives. Autonomous vehicle (AV) technologies are already roughly ten times safer than human drivers. While AV makers can still be subject to certain federal motor vehicle safety regulations and held liable for product defects, we would not want the courts to unleash more expansive liability schemes based on the idea that these technologies must be perfect before being allowed in society. Holding new algorithmic technologies to a more stringent legal standard than humans — what two scholars refer to as “robophobia” — would result in less societal safety overall. Sometimes, liability needs to be limited for certain new technologies to fully unleash their life-saving potential.
Designing for Dynamism
What would adaptive AI governance look like? Not hasty regulation imposing rigid requirements or tort law run amok, but frameworks that enable informed choice and rapid course correction. Balanced algorithmic governance could incorporate:
Transparency regimes that scale with impact: minimal disclosure for low-stakes applications, robust testing and documentation for high-risk deployments. Not government-dictated architectures, but intelligible information that lets users, purchasers, and downstream integrators make informed decisions.
Regulatory sandboxes that let innovators test novel approaches under supervision, developing evidence about what works before deploying at scale. This creates the empirical foundation for adaptive standards without locking in premature assumptions.
Industry-led standards development, with government providing coordination and baseline principles rather than prescriptive mandates. The aim is to pool learning, establish common expectations, and create certification mechanisms that differentiate between careful and reckless actors.
Accessible redress mechanisms for common disputes that deliver rapid resolution without litigation costs. Courts remain available for addressing egregious misconduct and enforcing established standards, but they handle what they do well rather than attempting to create governance frameworks from scratch.
Generally applicable laws that can be tapped when other problems arise. Governments already possess a wide variety of consumer protection laws, civil rights laws, contract and property laws, and more that cover potential harm that develops from algorithmic systems, just like any other technology. Those more adaptive, ex post policies are superior to top-down, technocratic regulations or broad-based new liability schemes.
This governance blueprint isn't heavy-handed intervention. It's building the information infrastructure that lets markets function. It is a drastically superior alternative to proposals at the state and federal levels to impose vague duties of care on AI labs. Proposals like the new “AI LEAD Act,” a federal bill introduced by Senators Dick Durbin (D-IL) and Josh Hawley (R-MO), would subject developers to liability absent compliance with indeterminate and open-ended standards. Developers operating under such a system are incentivized to be risk-averse and to adhere to the most stringent interpretation of the standard in question to avoid liability. Some people are excited by that notion; they’d welcome labs moving slowly and breaking nothing. These individuals are presumably in positions of privilege: they benefit from the status quo and have little to no need for the benefits promised by AI, such as improved healthcare, personalized education, and streamlined legal services. For the rest of us, however, this extreme risk aversion is contrary to a society that aims to give more people more tools to exercise greater control over their lives.
The Liberty Interest in Simple Rules
There's a deeper issue that tort advocates often overlook: uncertainty itself can constrain liberty. When developers cannot know what constitutes responsible practice until litigation resolves novel questions years later, they face a stark choice — avoid entire categories of innovation or proceed with unquantifiable legal risk.
This isn't the kind of uncertainty that drives healthy experimentation. It's the kind that privileges incumbents, discourages entry, and channels innovation toward legally defensible mediocrity rather than transformative possibilities.
Clear, adaptive, light-touch governance frameworks — transparent baselines that evolve with evidence — expand rather than constrain the opportunity set. They let entrepreneurs assess risk rationally, compete on merit rather than legal strategy, and focus resources on building rather than defending. That was the winning policy model that Congress and the Clinton administration established in the mid-1990s, and it continues to represent a better approach than waiting for courts to work things out using confusing standards, often inspired by the novel theories of liability now being floated in some of the most regulatory-minded states. Leaving it to the courts isn't preserving flexibility; it imposes the most rigid constraint imaginable: profound uncertainty about fundamental operating conditions, resolved only through expensive, lengthy adjudication that produces fragmentary guidance long after the technology has moved on.
Choosing Adaptive Markets Over Adversarial Drift
The choice before us isn't courts versus regulation. It's functional market discipline versus dysfunctional institutional drift.
The bill for choosing wrong won't be paid in courtroom defeats. It will be paid in innovations foregone, opportunities never realized, and a creeping regulatory uncertainty that advantages incumbents while foreclosing the very experimentation we should be encouraging. That's not a future that serves liberty, dynamism, or the distributed problem-solving that has always been America's competitive advantage.
We can do better, and doing better starts with recognizing that while courts serve essential functions, governing fast-moving technology through slow-moving, arbitrary, and expensive litigation isn't wisdom — it's an institutional mismatch masquerading as principle.
Kevin Frazier is the AI Innovation and Law Fellow at the University of Texas School of Law and co-host of the Scaling Laws podcast.
Adam Thierer is a senior fellow for the R Street Institute’s Technology & Innovation team.