AI's Silent War on Women

96% of all deepfakes are non-consensual pornography. 99% of those target women.

Brian Demsey | 2025 | 8 min read


The Grok Wake-Up Call

Two weeks ago, the Centre for Countering Digital Hate revealed that Elon Musk's Grok AI generated an estimated 3 million sexualized images of women and children in a matter of days. Users needed only type "put her in a bikini" or "remove her clothes" to weaponize the tool against any woman with a photo online.

The EU launched an investigation. The UK's Ofcom followed. Governments expressed outrage.

Then we all moved on to the next news cycle.

But here's what the headlines missed: Grok isn't the problem. Grok is a symptom. And until the AI industry confronts what I call the "verification vacuum," these disasters will keep happening - with women paying the price.


The Scale of Harm

96% of all deepfakes are non-consensual pornography

99% of those target women

100,000 explicit deepfake images circulated daily across 9,500+ websites

80% of sex work now occurs online, making millions of women's images available for weaponization

These aren't abstract statistics. Each number represents a woman who woke up to find her face grafted onto pornographic content she never consented to create. A woman who lost her job, her relationships, or her sense of safety because someone with a laptop and five minutes decided to violate her.


The Technical Failure

Here's the part that should alarm every investor and executive in AI: the tools that created this crisis have essentially no verification layer. The same industry that invented self-driving cars, defeated world chess champions, and generated $100 billion in investment cannot reliably detect whether an image is real.

Every major AI company will tell you they have "robust safety measures" and "content moderation policies." I've tested these claims.

My company runs queries simultaneously through eight major AI models - GPT-4o, Claude, Gemini, Grok, Cohere, DeepSeek, and others - using proprietary scoring algorithms to detect hallucinations and inconsistencies. What we've found is sobering: safety guardrails are trivially easy to circumvent, inconsistent across platforms, and largely performative.
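The cross-model approach can be sketched in outline. Everything below is illustrative - the function names, the stubbed responses, and the 0.75 agreement threshold are assumptions for the sketch, not the proprietary scoring algorithms described above. The core idea is simple: when independent models disagree on the same query, at least some of them are likely wrong.

```python
from collections import Counter

def consistency_score(responses: dict[str, str]) -> float:
    """Fraction of models agreeing with the most common (normalized) answer."""
    normalized = [r.strip().lower() for r in responses.values()]
    if not normalized:
        return 0.0
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(normalized)

def flag_inconsistent(responses: dict[str, str], threshold: float = 0.75) -> bool:
    """Flag a query whose cross-model agreement falls below the threshold."""
    return consistency_score(responses) < threshold

# Stubbed responses standing in for live API calls to three models:
answers = {
    "model_a": "Paris",
    "model_b": "Paris",
    "model_c": "Lyon",
}
print(flag_inconsistent(answers))  # 2/3 agreement is below 0.75, so flagged
```

A real pipeline would add semantic matching rather than exact string comparison, but even this crude majority-vote check illustrates why single-model guardrails are so easy to circumvent: no one is checking the model's answer against anything.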


The Industry's Playbook

When Grok's image generation exploded into controversy, the company's response was to restrict the feature to paying customers. Not to fix the underlying verification problem. Just to put it behind a paywall.

This is the industry's standard playbook: announce safety commitments, deploy minimal safeguards, react to scandals with cosmetic changes, and wait for the news cycle to move on.


The Most Vulnerable Victims

The women most harmed by this verification vacuum are the ones least likely to have lobbyists, lawyers, or platforms to fight back.

Consider women in the online sex work economy - a population that has grown dramatically as 80% of the industry moved to internet-based solicitation. Because their images are already public, they are uniquely exposed to deepfake abuse.

Research shows 77% of women in this economy have their earnings controlled by others. 68% meet criteria for PTSD. They exist at the intersection of exploitation and technology, and our AI safety infrastructure offers them nothing.


Every Woman Is At Risk

This isn't just about sex workers. It's about every woman with a social media presence - which is to say, virtually every woman in the developed world.

A study by ESET found that 50% of women worry about becoming victims of deepfake pornography. One in ten reported either being a victim, knowing a victim, or both. The average age at which someone now receives their first sexual image is 14.

The technology to violate women at scale exists. The technology to verify and detect that violation barely exists at all.


The Business Case for Verification

Here's the economic argument that AI companies are missing:

The "hallucination economy" - which I've written about previously - generates more revenue from AI failures than successes. Content moderation teams, legal settlements, PR crisis management, regulatory compliance - these are multi-billion dollar cost centers that exist because we built powerful generation systems without corresponding verification systems.

For investors: every portfolio company using AI faces deepfake liability. Every platform faces regulatory exposure. Every brand risks association with AI-generated harmful content.

For executives: your employees are vulnerable. Your customers are vulnerable. Your daughters are vulnerable.

The market for AI verification is essentially uncontested. Companies that solve this problem won't just do good - they'll capture a massive, inevitable market.


What Real Safety Requires

Real AI safety for women would require:

Verification systems that can reliably detect whether an image is real or synthetic

Guardrails that are consistent across platforms and resistant to trivial circumvention

Safeguards designed for the people least able to fight back, not just those with lawyers and lobbyists

None of this is technically impossible. It simply hasn't been prioritized.


The Question We Must Answer

Grok's deepfake disaster will fade from the headlines. The EU investigation will produce a report. Perhaps there will be fines.

And tomorrow, another AI tool will be released with another set of performative guardrails that sophisticated bad actors will circumvent within hours.

The question for every AI company, investor, and policymaker is simple: How many women need to be harmed before verification becomes as important as generation?


A Lesson in Priorities

Dr. Arnold Beckman, the inventor and philanthropist, once told me that it was more difficult to give money away effectively than it was for him to earn it. He meant that solving hard problems requires more than resources - it requires focus, humility, and a willingness to confront uncomfortable truths.

The AI industry has the resources. What it lacks is the will.

Until that changes, the verification vacuum will continue to claim victims - overwhelmingly women, disproportionately the vulnerable, and entirely preventable.

Brian Demsey is the founder and CEO of Hallucinations.cloud LLC, an AI safety company developing multi-model verification systems. He was a founder of the RemoteNet Corporation and the Beckman Laser Institute, and spent 50 years building enterprise technology systems for Fortune 100 companies.