The AI Paradox

Why tech's biggest spenders may be building fragility

Brian Demsey | Published in The Information | 2025

A new framework reveals which AI companies are positioned to thrive in chaos—and which are optimizing themselves toward catastrophic failure.

The four largest tech companies plan to spend $320 billion on AI in 2025—a 44% increase from last year. Microsoft, Alphabet, Amazon and Meta are in an arms race, each betting tens of billions that scale and speed will determine who dominates artificial intelligence.

But a framework borrowed from risk theorist Nassim Nicholas Taleb suggests they may be making a catastrophic error.


The Fragility Thesis

Taleb's concept of "antifragility" describes systems that gain from disorder and volatility rather than merely resisting them. Applied to the AI industry, it reveals a troubling pattern: the companies spending the most are often building the most fragile business models, while quieter players are constructing engines that strengthen with each market disruption.

The implications are material. In interviews with investors, analysts and industry executives, a consensus emerged that the current spending trajectory is unsustainable without corresponding revenue growth—and that growth remains elusive for many participants.

"Big Tech firms are spending billions on AI infrastructure but not yet reaping equivalent revenue," one analysis noted. If CFOs grow more cautious in 2025-2026, a wave of scrutiny over AI project spending could dramatically dampen growth for the largest investors.


The Enterprise Inversion

The clearest example of the fragility-antifragility divide exists between OpenAI and Anthropic.

OpenAI commands approximately 700 million weekly ChatGPT users and an estimated $10 billion in annual revenue. Its consumer dominance seems unassailable. Yet the company's market share in generative AI chatbots has declined from 76% in January 2024 to 59.5% today—a sign that consumer attention is fragmenting.

Anthropic, by contrast, generates approximately $211 per monthly user compared to OpenAI's $25 per weekly user—an 8x monetization efficiency advantage, according to industry data. The company hit $4 billion in annualized revenue by June 2025, quadrupling from $1 billion just six months earlier.
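The 8x figure is simple division of the two per-user estimates cited above. Note, however, that the cohorts differ (monthly versus weekly active users), so the ratio is directional rather than exact; a sketch of the arithmetic:

```python
# Per-user revenue estimates cited above (industry data; cohort windows differ)
anthropic_per_monthly_user = 211.0  # dollars per monthly active user
openai_per_weekly_user = 25.0       # dollars per weekly active user

# Directional ratio only: monthly vs. weekly denominators are not like-for-like
ratio = anthropic_per_monthly_user / openai_per_weekly_user
print(f"{ratio:.1f}x")  # prints "8.4x"
```

Normalizing both figures to the same cohort window would require weekly-to-monthly conversion data the source does not provide.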

More critically, Anthropic now holds 32% of enterprise large language model market share by usage, surpassing OpenAI's 25%. In coding—the highest-value enterprise use case—Anthropic commands 42% market share.

The distinction is structural. Consumer AI usage is vulnerable to free alternatives and efficiency breakthroughs. Enterprise contracts involve lengthy procurement cycles, custom integrations and switching costs measured in millions of dollars. When a company embeds Claude into its development workflow across 10,000 engineers, replacing it requires retraining staff, rebuilding integrations and risking productivity losses.

One enterprise customer interviewed described spending seven months integrating Anthropic's models into their internal systems. "We're not switching unless something is catastrophically wrong," the executive said. "The cost isn't the subscription—it's the engineering time."


The Infrastructure Exception

Nvidia presents the clearest case of genuine antifragility in AI. The company's fiscal 2025 revenue of $130.5 billion represented 114% growth, and it became the first company to surpass a $4 trillion market cap.

Nvidia's position is antifragile for a counterintuitive reason: increased competition among AI labs increases demand for its chips. Every new foundation model, every efficiency breakthrough, every startup racing to build better AI requires more compute during development. The DeepSeek announcement, which initially triggered a $600 billion market-cap drop for Nvidia, ultimately validated this thesis: the company's stock recovered within weeks.

CFO Colette Kress told analysts the company expects between $3 trillion and $4 trillion in AI infrastructure spending by the end of the decade. Nvidia doesn't need to predict which AI approach wins; it sells to all of them.

Amazon Web Services occupies a similar position. Rather than building consumer-facing AI products, AWS provides infrastructure that benefits regardless of which models or applications succeed. The company is spending $100 billion in 2025, with the vast majority directed toward AI infrastructure, according to CEO Andy Jassy.


The Incumbent's Dilemma

Google faces perhaps the starkest fragility risk. The company plans to invest $75 billion in AI in 2025 and generated $71 billion in advertising revenue in Q2 2025. But AI-powered search represents an existential threat to its core business model. Each query answered by an AI chatbot is a query that doesn't generate ad revenue. Google's own AI Overviews now support over 2 billion monthly users and drive more than 10% of global search queries—cannibalizing traditional search even as the company builds it.

The company is trapped in what strategists call the innovator's dilemma: it must build the technology that disrupts its most profitable business, but doing so accelerates its own decline in traditional search advertising. One analyst who covers Google described the challenge: "They're spending $75 billion to protect a business that AI is fundamentally undermining. That's not antifragile—it's the definition of fragility."


The Optionality Play

Microsoft presents the most sophisticated positioning. The company has committed $80 billion to expanding Azure in 2025 and maintains a $368 billion contracted backlog with 98% recurring revenue.

Its enterprise-first model provides multiple forms of optionality.

The company's Copilot family serves over 100 million monthly active users and is used by 70% of Fortune 500 companies. With over 430 million paid M365 commercial seats and initial Copilot pricing of $30 per seat monthly, penetration remains in early stages.

Perhaps most importantly, Microsoft's exposure to AI volatility is limited. The company can reduce AI investment without imperiling its core business—the definition of having options without obligations.


The Meta Paradox

Meta's strategy is the most unconventional: spending $60 billion to $65 billion in 2025 while open-sourcing its Llama models and collecting no direct AI revenue.

CEO Mark Zuckerberg has pledged to spend "hundreds of billions" more on AI over the long term. The company's ad revenue rose 21% to $47 billion in Q2 2025, with AI improving ad targeting and conversions—demonstrating that AI investments are already paying off in the core business.

Meta's open-source approach creates antifragility through ecosystem dependency. The company doesn't need Llama to be the best model; it needs the industry to fragment enough that no competitor can establish a monopoly. If Meta had to pay a competitor for AI capabilities, it would face the same dependency that plagued it when Apple controlled the mobile ecosystem.

By commoditizing AI through open source, Meta ensures no rival can extract monopoly rents while improving its own products through community contributions. It's a classic antifragile play: limited downside, uncapped upside.


The Efficiency Threat

The January 2025 release of DeepSeek's R1 model, reportedly trained at 70% lower cost than comparable U.S. models, triggered the most significant test of the fragility thesis.

Nvidia's market cap dropped $600 billion in a single day. The logic was straightforward: if AI can be built more efficiently, demand for expensive compute infrastructure would decline. But within weeks, the thesis reversed. Industry executives argued that efficiency breakthroughs would democratize AI development, creating more competitors and more demand for infrastructure. Nvidia's stock recovered.

The episode revealed a critical distinction: companies selling directly to end-users (whether consumers or enterprises) face efficiency threats. Companies selling infrastructure to developers benefit from efficiency because it expands the market.


What the Data Shows

Analysis of retention rates reveals which companies have built antifragile talent models. Anthropic leads the AI industry with an 80% retention rate for employees hired over the last two years. DeepMind follows at 78%, while OpenAI trails at 67%—closer to large tech companies like Meta at 64%.

High retention in a competitive talent market suggests employees believe in long-term viability. Anthropic employees cite intellectual discourse and researcher autonomy as key factors, according to industry research. Separately, traffic data shows the consumer-enterprise divide. While ChatGPT traffic was 50x Claude's in April 2025, Anthropic's revenue surge demonstrates that enterprise demand can drive significant revenue growth independent of consumer adoption.


The New Entrants: Pre-Revenue Fragility vs. Specialized Resilience

The contrast between established players and new entrants reveals the fragility thesis in stark relief.

Thinking Machines Lab, founded by former OpenAI CTO Mira Murati, closed a $2 billion seed round at a $12 billion valuation in July—the largest seed round in venture capital history. The company attracted a roster of OpenAI alumni including John Schulman, Barret Zoph, and Luke Metz.

It has no product and no revenue.

The valuation rests entirely on team pedigree and the belief that former OpenAI leaders can replicate their success. But Thinking Machines faces structural fragility: it's entering a market where competitors have years of head start, hundreds of billions in capital already deployed, and established distribution. The company must achieve breakthrough innovation—not incremental improvement—to justify its valuation.

One AI investor described the dynamic: "You're betting $12 billion that this team can build something so superior that enterprises will rip out their existing integrations. That's not antifragile—it's the highest-risk bet in the industry."

Perplexity AI presents a different fragility profile. The company raised $200 million at a $20 billion valuation, with annual recurring revenue approaching $200 million. It processed 780 million queries in May 2025, growing more than 20% month-over-month.

But Perplexity's consumer-facing search model makes it vulnerable on multiple fronts. The company faces legal scrutiny over copyright infringement allegations from BBC, Dow Jones, and The New York Times—challenges that could fundamentally undermine its business model. More critically, it competes directly against Google's distribution and free AI search features.

The company is pursuing device partnerships with Samsung and Motorola to build distribution, but these deals put Perplexity at the mercy of hardware manufacturers' strategic priorities. Consumer search habits could shift with a single Google product update.

Elon Musk's xAI reportedly raised $10 billion at a $200 billion valuation in September 2025, led by Middle Eastern investors and sovereign wealth funds. The company benefits from integration with X (formerly Twitter) for data and distribution, and can leverage Tesla's engineering resources.

But xAI's valuation relative to its traction suggests extreme fragility. The company's fortunes are tied to Musk's reputation and political controversies, factors that introduce volatility without the offsetting upside that defines antifragile systems.


The Infrastructure Exception: Scale AI

Scale AI represents a different model entirely. The company received $14.3 billion from Meta, accounting for 73% of all funding to companies valued above $5 billion in the first half of 2025.

Scale provides data labeling, evaluation services, and infrastructure that all AI companies require regardless of which models dominate. Like Nvidia, Scale benefits from AI model proliferation. More competitors means more demand for evaluation and data services. The company has secured enterprise contracts with defense and government agencies—customers with long procurement cycles and high switching costs. This positions Scale as genuinely antifragile: it gains from industry chaos while maintaining stable enterprise revenue.


The Vertical Specialists: Healthcare and Legal AI

A cluster of specialized AI companies reached unicorn status by targeting specific high-value verticals:

Abridge raised $300 million at a $5.3 billion valuation for healthcare AI documentation. Harvey raised $300 million at a $5 billion valuation for legal AI tools. Hippocratic AI raised $141 million at a $1.6 billion valuation for healthcare models.

These companies occupy a middle ground on the fragility spectrum. They benefit from domain expertise and regulatory moats that create switching costs once integrated into clinical or legal workflows. A hospital that embeds Abridge into its documentation system faces significant friction changing providers.

But vertical specialists face their own fragility: if foundation models commoditize domain capabilities through better general-purpose reasoning, specialized providers lose their differentiation. The question is whether domain expertise and workflow integration create enough defensibility before commoditization occurs.

One healthcare AI investor explained: "You're betting that clinical workflow integration is stickier than model capabilities. If GPT-7 can do medical documentation as well as a specialized model, what's the moat?"


The Path Forward

The AI industry's current spending trajectory presents a paradox: the largest investments may be creating the greatest vulnerabilities.

Companies optimizing for consumer scale face fragility from free alternatives and efficiency breakthroughs. Those betting on infrastructure benefit from ecosystem chaos. Those securing enterprise contracts build sticky, defensible revenue. Those raising billions pre-product are making the ultimate fragile bet: everything depends on execution against entrenched competitors.

In conversations with more than a dozen investors and industry executives, a pattern emerged: the highest conviction bets aren't on which model will win, but on which business models will prove antifragile.

"We're not asking 'who builds the best model,'" one venture investor said. "We're asking 'who has a business that gets stronger when everyone else's models get better.'"

By that measure, the industry's biggest spenders may be building precisely the wrong thing: optimized systems that excel under current conditions but shatter when conditions change. The newest entrants—raising billions before proving product-market fit—may be building even greater fragility.

The question isn't whether AI will transform the economy—it's whether today's leaders will survive the transformation they're creating, and whether tomorrow's challengers have built businesses resilient enough to reach tomorrow.

Brian Demsey is the founder and CEO of Hallucinations.cloud LLC, an AI safety company focused on multi-model truth verification. He has over fifty years of experience in enterprise technology.