The $500 Billion Bug

How AI's Hallucination Economy Turns Failure Into Profit

Brian Demsey | Published in The Information | 2025



The Settlement That Isn't

Anthropic's $1.5 billion copyright settlement is grabbing headlines as a watershed moment for AI. It isn't. It's merely the latest predictable beat in Silicon Valley's familiar rhythm: innovation, commercialization, damage, lawyers, settlements. We've seen this movie before with Napster, Uber, and social media. The real story—and the real scandal—is happening in plain sight, generating zero headlines and zero settlements.


The Hallucination Tax

I've written thousands of lines of code using Claude and ChatGPT. Like millions of developers and knowledge workers, I've also spent hundreds of hours cleaning up after them—debugging hallucinated functions, restructuring architectures based on confident nonsense, and hunting for problems that don't exist because an AI insisted they did. These models are constitutionally unable to say "I don't know" or "this won't work." They're optimized to always provide the next plausible-sounding step, even when that step leads off a cliff.


Failure as a Business Model

Here's the perverse part: every failure generates more revenue. When Claude sends me down a rabbit hole chasing its hallucination, I burn through more tokens, more API calls, more subscription time trying to recover. The worse the performance, the more I use the product.

It's a business model where failure is literally more profitable than success.
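The incentive inversion can be sketched as a toy billing model. Every figure here is a hypothetical assumption (the blended per-token price, the token counts, the number of recovery rounds); the sketch only illustrates how retry loops compound provider revenue:

```python
# Toy model of the "failure is more profitable than success" dynamic.
# All figures are hypothetical assumptions for illustration, not real pricing.

PRICE_PER_1K_TOKENS = 0.01  # assumed blended API price, USD


def interaction_revenue(prompt_tokens: int, completion_tokens: int,
                        recovery_rounds: int = 0,
                        tokens_per_recovery_round: int = 1500) -> float:
    """Revenue the provider collects from one task.

    A hallucination that sends the user down a debugging rabbit hole adds
    `recovery_rounds` of extra prompts and completions, each billed at the
    same per-token rate as the original answer.
    """
    total = prompt_tokens + completion_tokens
    total += recovery_rounds * tokens_per_recovery_round
    return total / 1000 * PRICE_PER_1K_TOKENS


clean_answer = interaction_revenue(500, 800)                      # works first try
hallucinated = interaction_revenue(500, 800, recovery_rounds=6)   # six cleanup rounds

print(f"clean:        ${clean_answer:.4f}")
print(f"hallucinated: ${hallucinated:.4f}")  # several times the clean revenue
```

Under these assumed numbers, the failed interaction bills roughly eight times the successful one; the exact multiple is arbitrary, but the direction is the point.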


The $500 Billion Hemorrhage

The global productivity losses from AI-induced wild goose chases aren't trivial—we're talking about $500 billion annually. Between developers debugging phantom problems, knowledge workers fact-checking hallucinations, companies restructuring systems built on non-existent capabilities, and the cascade effects of bad code proliferating through copy-paste, we're hemorrhaging productivity while celebrating the revolution.

This is a wealth transfer from the productive economy to Big Tech, disguised as a productivity revolution.


The Accountability Gap

The contrast with the copyright situation is stark. These companies have reserved billions for content licensing and copyright settlements, but zero for performance failures. A lawyer who gives bad advice faces a malpractice suit. An accountant who fabricates numbers loses their license. An AI that admits it fabricated information and wasted days of your time offers a cheerful "I apologize for the confusion!" and then bills you for the tokens in the apology.

At least when a restaurant serves you a bad meal, they might comp your dessert.


The Real Litigation Opportunity

The lawyers circling AI companies for copyright violations are thinking too small. The real litigation opportunity isn't in training data—it's in output liability. Imagine class actions for "negligent misrepresentation" or "failure to deliver merchantable quality." If any other product failed 20–30% of the time and cost you more money when it failed, consumer protection lawyers would be having a field day.


What This Means for Investors and Builders

For The Information's readers—investors, operators, builders—the implications are clear:

Factor in the hidden "hallucination tax." When calculating AI's ROI, that impressive automation might be generating equally impressive cleanup costs.

Recognize that current AI economics benefit from their own failures. Companies with business models that profit from poor performance rarely improve that performance voluntarily.

The legal industry is hunting in the wrong forest. Today's copyright settlements are nothing compared to tomorrow's performance liability cases. The first company to lose a major contract because AI-generated code failed, or the first financial firm to miss regulatory requirements due to hallucinated compliance advice, will open floodgates that make the Anthropic settlement look quaint.

This dynamic advantages incumbents. Google, Microsoft, and OpenAI can absorb both the copyright settlements and the reputation risk of performance failures. Startups can't. The "move fast and hallucinate things" era is creating a moat made of lawyers and liability.
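The "hallucination tax" adjustment to ROI can be put in back-of-envelope form. All inputs below are hypothetical assumptions, not measured data; the point is how sensitive a headline ROI figure is to the failure rate and the cleanup time per failure:

```python
# Back-of-envelope ROI adjustment for the hidden "hallucination tax".
# Every input is a hypothetical assumption; substitute your own numbers.

def ai_roi(gross_hours_saved: float,
           hourly_rate: float,
           tool_cost: float,
           failure_rate: float,
           cleanup_hours_per_failure: float,
           tasks: int) -> float:
    """Net ROI multiple of an AI tool once cleanup time is counted.

    failure_rate: fraction of tasks whose output needs human rework.
    """
    gross_benefit = gross_hours_saved * hourly_rate
    cleanup_cost = tasks * failure_rate * cleanup_hours_per_failure * hourly_rate
    net_benefit = gross_benefit - cleanup_cost - tool_cost
    return net_benefit / tool_cost


# Headline ROI that ignores cleanup, vs. ROI with a 25% failure rate
# and 1.5 hours of rework per failure (both assumed figures).
naive = ai_roi(200, 100, 2000, failure_rate=0.0,
               cleanup_hours_per_failure=0, tasks=400)
adjusted = ai_roi(200, 100, 2000, failure_rate=0.25,
                  cleanup_hours_per_failure=1.5, tasks=400)

print(f"naive ROI:    {naive:.1f}x")
print(f"adjusted ROI: {adjusted:.1f}x")
```

With these assumed inputs, the headline 9x return collapses to 1.5x once cleanup hours are priced in; different inputs give different multiples, but the cleanup term scales with usage just as the benefit does.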


The Real Question

The Anthropic settlement isn't a watershed—it's a distraction. While we're debating who owns the training data, we're ignoring who pays for the cleanup. The answer, of course, is all of us: $500 billion worth and counting, one hallucination at a time.

The real question isn't whether AI companies will pay for the content they used to train their models. It's when they'll start paying for the time they waste after training is done.

Brian Demsey is the founder and CEO of Hallucinations.cloud LLC, an AI safety company focused on multi-model truth verification. He has over fifty years of experience in enterprise technology.