As published in The Information
At age 83, looking at the world today is challenging. There's so much going on every moment, with the volume turned up to unheard-of levels. To gain perspective, I physically and spiritually moved to a small town in South Dakota. I start and end my days with spectacular sunrises and sunsets. Most days, the wind tops 40 mph at the crest of our hill, and the air is clear.
The Signal and the Noise
From this vantage point, patterns emerge that were invisible in the cacophony of coastal tech centers. The industry I've watched evolve over five decades now moves at a pace that would have been inconceivable when I was building my first companies. Yet the fundamental question remains unchanged: Are we building tools that empower humans, or systems that disrupt them?
The answer, I've come to believe, depends entirely on where you're standing when you ask the question.
The Disruption Narrative
Silicon Valley sells disruption like prairie towns once sold snake oil—as a cure-all for every societal ailment. "Move fast and break things" became not just a motto but a moral imperative. From my hill in South Dakota, watching another sunrise paint the endless sky, I wonder: What exactly are we breaking, and can we put it back together?
The AI revolution exemplifies this tension perfectly. Every major tech company races to deploy systems that can outthink, outwrite, and increasingly outperform humans at tasks we once considered uniquely ours. The narrative is always the same: This will democratize access, level playing fields, empower the powerless. But disruption, by definition, leaves debris in its wake.
The Empowerment Reality
Here's what I've learned building companies for half a century: True empowerment doesn't announce itself with fanfare. It arrives quietly, like morning light across the prairie, illuminating possibilities that were always there but hidden in shadow.
My own journey with AI safety infrastructure—building systems to verify and validate what these powerful models tell us—revealed something profound. The most empowering technologies aren't necessarily the most powerful ones. They're the ones that augment human judgment rather than replacing it.
Consider the paradox: We've built AI systems so sophisticated that we need other AI systems to check their work. I've dedicated my recent years to this very problem with H-LLM MULTI-MODEL, creating what amounts to a "trust layer" for artificial intelligence. The irony isn't lost on me—using technology to protect us from technology.
The Desktop Paradox
Here's a question that crystallizes our moment: How many AI-powered tools exist to help me on a desktop computer? The answer is both staggering and meaningless.
The Avalanche
By my count, there are over 10,000 AI-powered applications available today, and that's being conservative. OpenAI's GPT store alone hosts thousands. Add in Claude, Gemini, Perplexity, and Poe with its dozens of models. Every major software company has retrofitted AI into its products: Microsoft's Copilot spans the entire Office suite, Adobe's Firefly touches every Creative Cloud app, and Notion, Grammarly, Canva, even Zoom wants to summarize your meetings with AI.
Then there are the specialized tools: 500+ AI writing assistants, 300+ AI image generators, 200+ AI code copilots, 100+ AI voice synthesizers. GitHub reports over 3,000 repositories tagged "AI assistant." Product Hunt sees 50 new AI tools launch every single day.
The Subscription Tax
And here's the kicker: roughly 85% of them want a subscription. $20 here for ChatGPT Plus, $20 there for Claude Pro, $30 for Midjourney, $10 for Perplexity, $12 for GitHub Copilot. The average power user could easily rack up $500 monthly just keeping pace with the "essential" AI tools.
We've created a new monthly tax on productivity—a subscription sprawl that would make cable companies jealous.
The Data Harvest
But here's the part that should keep you up at night: Nearly all of them—I'd estimate 95%—have terms of service that grant them rights to your data that would have been unconscionable just a decade ago. They train on your inputs. They analyze your patterns. They share your usage with "partners" and "affiliates." Some explicitly state they can use your conversations for model improvement. Others bury it in subsection 14.3.2 of their privacy policy.
Read the fine print on any AI coding assistant—your proprietary code becomes their training data. Upload a document for AI editing—it joins their corpus. Ask for help with sensitive business strategy—congratulations, you've just contributed to the collective intelligence that your competitors can access tomorrow.
The Content Pollution Engine
And then there's the amplification problem. At least 2,000 of these tools aren't just AI assistants; they're content distribution engines. Buffer, Hootsuite, Later, Sprout Social, all now AI-powered. From Jasper to Zapier, they connect your AI-generated content to every platform simultaneously.
One prompt can spawn a thousand posts across Facebook, Twitter, LinkedIn, Instagram, TikTok, Medium, Substack, and platforms you've never heard of. Write once, pollute everywhere. That's the promise. For $49 a month, you can be omnipresent: your AI-generated thoughts, mass-distributed before you've had time to consider whether they were worth sharing in the first place.
The Detection Arms Race
And now for the ultimate absurdity: there are already over 300 apps just to detect AI-generated content. GPTZero, Originality.ai, Copyleaks, Writer, Sapling—each claiming to spot what the others miss. Universities subscribe to one, publishers to another, companies to a third.
We're building AI to create content, AI to distribute it, and AI to detect that it was AI all along. It's an arms race where we're supplying both sides. The detection tools chase the generation tools, which evolve to evade detection, which forces better detectors, which drives craftier generators. Another $20/month to know if the $20/month tool you're using can fool the other $20/month tool your professor is using.
We've created an entire economy of machines checking machines' homework.
The Broken Machines
I've written about manifestos, about the topsy-turvy state of our politics, and about the big picture of our economy. From this prairie perch, the dysfunction becomes startlingly clear. The health care and government shares of GDP are, and I don't want to be dramatic here, wildly out of whack. We've built systems so complex that they consume more resources maintaining themselves than delivering their intended services. Health care that bankrupts the healthy. Governance that governs mainly itself.
Our wealth redistribution system is broken. The most obvious solution, a growing young segment of the population, is being dismantled by scare tactics. We demonize immigration even as our demographic pyramid inverts.
The irony is palpable: We have AI systems that can diagnose disease, model economies, and optimize supply chains, yet we can't figure out how to ensure basic prosperity for the majority of humanity. We're building artificial intelligence while neglecting human intelligence.
The Prairie Solution
Here's what I've learned from my neighbors on the prairie: Farmers and ranchers in the grand old days could do it all. Mechanical work on their tractors, electrical repairs in their barns, crop rotation science, animal husbandry, and still make it to church on Sunday with their families. They were self-sufficient not by choice but by necessity—the nearest specialist was a day's ride away.
Now you can achieve that same self-sufficiency in the digital realm. Write your own code. Create your own apps. Skip the subscriptions, the data harvesting, the content pollution, the detection arms race. You don't need 10,000 tools when you can build the three that actually matter to you.
This is the path I chose with H-LLM MULTI-MODEL. Not another app in the app store, but a trust layer you control. No mysterious terms of service. No data mining. No monthly tribute to Silicon Valley. Just a tool that does what it says: validates AI outputs so you can trust what you're building.
The Choice
Disruption or empowerment? After 10,000 AI tools, 85% subscriptions, 95% data harvesting, 2,000 content polluters, and 300 detection apps, the answer becomes clear: True empowerment isn't adding another app. It's knowing when to stop adding apps. It's the confidence to say "I can build what I need"—just like those farmers who kept their own tractors running and their own accounts balanced.
At 83, I won't see how this story ends. But I've seen enough beginnings to know that the ending isn't predetermined. We're writing it now, line by line, decision by decision, code by code.
Tomorrow's Sunrise
Each morning, as the sun breaks the horizon and the wind picks up, I'm reminded that some things remain constant despite all our disruptions. The sun rises. The wind blows. Humans seek purpose, connection, and the chance to leave things better than we found them.
The question isn't whether AI will disrupt our world—it already has. The question is whether we'll direct that disruption toward genuine empowerment. Will we build systems that amplify human potential or ones that diminish it? Will we create tools that serve us or masters we serve?
Not more tools to think for us, but the skills to think for ourselves. Not AI that replaces human judgment, but understanding that enhances it. Not disruption that breaks what works, but innovation that fixes what's broken.
That's the choice before us, as clear as a South Dakota sunrise and as urgent as the prairie wind. And sometimes, the most revolutionary act is simply refusing to join the revolution—choosing instead to build something small, something real, something yours.