The Wrapper Is Dead. What Survives the AI Shakeout?

Google's startup chief just declared LLM wrappers and aggregators are facing extinction. The Pentagon is threatening to banish Anthropic over AI usage boundaries. For enterprises building on AI, the question is no longer which model to use — it's whether your AI investment is defensible or disposable.

Two stories broke over the weekend that every enterprise building on AI should read together.

The first is a warning from inside Google. The second is a standoff between the Pentagon and one of the world's most valuable AI companies. Separately, they're headlines. Together, they form the clearest picture yet of what makes enterprise AI investments worth keeping — and what gets wiped out in the next model upgrade.

Google Just Declared the Wrapper Dead

On Friday, Darren Mowry — the VP who leads Google's global startup organisation across Cloud, DeepMind, and Alphabet — told TechCrunch's Equity podcast that two of the most popular AI startup models have their “check engine light” on: LLM wrappers and AI aggregators.

His assessment was blunt. Startups that simply layer a product or user experience on top of foundation models from OpenAI, Google, or Anthropic are running out of time. “If you're really just counting on the back-end model to do all the work and you're almost white-labeling that model, the industry doesn't have a lot of patience for that anymore,” Mowry said. Wrapping “very thin intellectual property around Gemini or GPT-5” no longer constitutes differentiation.

AI aggregators — startups that combine multiple LLMs into one interface or API layer to route queries across models — face similar pressure. These platforms provide orchestration, monitoring, or governance tooling, but Mowry advised founders to avoid the aggregator business entirely. Users increasingly expect proprietary value, not just access to multiple models.

The numbers make the warning urgent. Seventeen US AI companies raised over $100 million each between January 1 and February 17, 2026 — just 48 days. Four days after that count closed, Mowry publicly declared many of those business models structurally unsustainable. An estimated 47 AI startups burned through $2.1 billion in combined funding during 2025 pursuing similar approaches, and two AI unicorns disappeared entirely last year.

Mowry compared the current moment to the late-2000s cloud market, when startups reselling AWS infrastructure were squeezed once Amazon built its own enterprise tools. Only companies offering genuine services — security, migration, DevOps — survived. AI wrappers and aggregators face the same margin compression as foundation model providers expand their enterprise capabilities.

What Actually Constitutes a Moat

Mowry was specific about what differentiates the survivors: “You've got to have deep, wide moats that are either horizontally differentiated or something really specific to a vertical market.”

He cited Cursor (a coding assistant with deep development workflow integration) and Harvey AI (a legal AI tool with domain-specific training) as examples of wrapper-style products that have defensible positions. The distinction isn't whether you use foundation models — nearly every AI product does. The distinction is whether you add proprietary value that the foundation model providers can't easily replicate.

From an engineering perspective, we see this play out every day in how enterprises approach AI deployment. The moats that actually hold are built from four layers.

Proprietary data that compounds. Spotify's co-CEO revealed this month that its music preference dataset — understanding what “workout music” means across cultures, geographies, and individual contexts — is built from 751 million monthly active users and can't be scraped from the internet. In document processing, this is the same dynamic: every invoice, contract, and form that an AI system processes builds domain-specific extraction patterns that generic models don't have. Our Vult document intelligence platform processes documents in any language including Arabic at 99.9% accuracy precisely because of this compounding data advantage — every document processed makes the system smarter for the next one.

Deep workflow integration. Spotify's internal AI coding system, Honk, works because it sits on top of Backstage (dependency mapping since 2020) and Fleet Management (cross-repository automation since 2022). The AI tool is Claude Code. The moat is the infrastructure underneath. The same principle applies to enterprise AI deployment. An AI agent that automates a single task is a wrapper. An AI agent that integrates into procurement, compliance, customer service, and reporting workflows — understanding the dependencies between them — is infrastructure. That's the difference between a demo and a deployment, and it's the core architecture behind our Minnato platform: AI agents that go live in minutes and scale across enterprise operations, not sit beside them.

Domain-specific operational knowledge. Harvey AI isn't defensible because it uses GPT. It's defensible because it encodes legal reasoning patterns, precedent relationships, and firm-specific workflows that generic models don't have. The model is the engine. The domain knowledge is the moat. For voice AI, this means understanding not just what a customer says, but the emotional context, the cultural nuances, and the conversation history that turns a cold bot interaction into a real relationship. That domain intelligence is what Dewply is built on — reading emotions, adapting live, and turning every conversation into measurable trust.

Model-agnostic orchestration intelligence. This is the moat that Mowry's warning makes most urgent. Aggregators that simply offer access to multiple models are commoditised. But orchestration that knows which model to use for which task, when to switch, and how to govern the transitions — built on proprietary evaluation data — is genuinely defensible. This is foundational to how we architect enterprise deployments: the ability to swap models as capabilities evolve, without disrupting the business processes that depend on them.
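To make the distinction concrete, here is a minimal sketch of what "orchestration intelligence" means in code. Everything in it is hypothetical — the model names, scores, and costs are illustrative, not real benchmarks — but it shows the two ingredients that separate a defensible router from a commodity aggregator: routing decisions informed by evaluation data, and outcomes fed back in so the routing improves with usage.

```python
from dataclasses import dataclass, field

# Hypothetical model registry: names, scores, and costs are
# illustrative, not real benchmark data.
@dataclass
class ModelProfile:
    name: str
    scores: dict          # task type -> historical quality score (0..1)
    cost_per_call: float


@dataclass
class Router:
    """Route each task to the model with the best quality-per-cost ratio,
    then fold observed outcomes back into the scores."""
    models: list
    history: list = field(default_factory=list)

    def pick(self, task: str) -> ModelProfile:
        # Quality-per-cost routing: this ratio is where proprietary
        # evaluation data lives, and what a thin aggregator lacks.
        return max(self.models,
                   key=lambda m: m.scores.get(task, 0.0) / m.cost_per_call)

    def record(self, model: ModelProfile, task: str, success: bool) -> None:
        # Each observed outcome nudges the score (exponential moving
        # average), so routing gets smarter as the system is used.
        old = model.scores.get(task, 0.5)
        model.scores[task] = 0.9 * old + 0.1 * (1.0 if success else 0.0)
        self.history.append((model.name, task, success))


router = Router(models=[
    ModelProfile("model-a", {"extraction": 0.92, "summarise": 0.70}, 2.0),
    ModelProfile("model-b", {"extraction": 0.80, "summarise": 0.85}, 1.0),
])
best = router.pick("extraction")  # cheaper model wins on quality-per-cost
```

The accumulated `history` and updated `scores` are the proprietary asset: swap out either underlying model and the routing logic, built from your own evaluation data, survives the change.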

The Pentagon Story: When the Moat Is the Boundary

While Mowry was declaring thin wrappers dead, a very different moat story was unfolding in Washington.

Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday morning for what sources describe as an ultimatum meeting. The Pentagon is threatening to designate Anthropic a “supply chain risk” — a label typically reserved for foreign adversaries — after the company refused to remove all safeguards on military use of Claude.

Anthropic holds a $200 million Defense Department contract. Claude is the only AI model currently available on the military's classified networks. The dispute escalated after the January 3 capture of Venezuelan President Nicolás Maduro, during which Claude was reportedly used through Anthropic's partnership with Palantir. A senior Anthropic executive subsequently contacted Palantir to ask about Claude's role, which Pentagon officials interpreted as disapproval.

Anthropic's position: the company will support national security use cases but insists that two areas remain off limits — mass surveillance of Americans and fully autonomous weapons systems that fire without human involvement. The Pentagon's position: these categories are too ambiguous, and any AI provider must agree to make its tools available for “all lawful purposes.”

Anthropic just closed a $30 billion funding round at a $380 billion valuation. Eight of the ten largest US companies use Claude. The company generates $14 billion in annual revenue. This isn't a small startup that can be pushed around — and the Pentagon isn't a customer that can be easily ignored.

Why This Matters for Every Enterprise Deploying AI

The Pentagon-Anthropic dispute might seem distant from enterprise AI strategy. It isn't.

Usage boundaries are now a business-critical question. Every enterprise deploying AI has implicit or explicit usage boundaries — what the AI can and can't be used for, what data it can access, what decisions it can make autonomously. The Pentagon dispute is the most extreme version of a negotiation every enterprise is having with its AI providers. Who controls the boundaries? The customer, the provider, or the regulator? If you haven't formally defined your AI usage policies, you're leaving that answer to chance.
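What "formally defined usage policies" looks like in practice can be as simple as an explicit allow/deny gate in front of every AI call. The sketch below uses hypothetical use-case categories; the point is that the boundaries live in your own code and audit log, not in the provider's terms of service.

```python
# Hypothetical usage policy: categories are illustrative.
ALLOWED_USES = {"invoice_extraction", "support_triage"}
PROHIBITED_USES = {"automated_hiring_decision", "individual_surveillance"}

audit_log = []


def check_usage(use_case: str, actor: str) -> bool:
    """Gate every AI call through the policy and record the decision."""
    if use_case in PROHIBITED_USES:
        decision = "denied"
    elif use_case in ALLOWED_USES:
        decision = "allowed"
    else:
        decision = "escalated"  # unknown use cases go to a human reviewer
    audit_log.append({"use_case": use_case,
                      "actor": actor,
                      "decision": decision})
    return decision == "allowed"
```

The escalation path for unlisted use cases matters as much as the deny list: it means nobody has to guess where the boundary sits, and the audit log answers "who decided?" after the fact.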

Provider lock-in has new dimensions. Claude being the only model on classified networks illustrates what deep provider dependence looks like. The Pentagon can't easily switch — which gives Anthropic leverage but creates risk for both sides. Enterprises with single-model dependencies face the same structural vulnerability. Model-agnostic architecture isn't a nice-to-have. It's your negotiating power. When your infrastructure can swap providers without operational disruption, no single vendor controls your roadmap.
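One way to keep that negotiating power is to make sure business logic never touches a vendor SDK directly. A minimal sketch, with invented vendor classes standing in for real SDKs: the application depends on a small interface, so changing providers is a configuration change rather than a rewrite.

```python
from typing import Protocol


class TextModel(Protocol):
    """The only surface business code sees; vendor SDKs live behind it."""
    def complete(self, prompt: str) -> str: ...


# Stand-ins for real vendor SDK adapters (hypothetical).
class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"


class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"


def summarise_invoice(model: TextModel, text: str) -> str:
    # Business logic depends on the interface, not the vendor, so
    # swapping providers never touches this function.
    return model.complete(f"Summarise: {text}")
```

If a vendor raises prices or changes terms, the switching cost is the cost of writing one new adapter — not re-plumbing every workflow that calls the model.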

Safety positions have commercial consequences. Anthropic's safety stance is simultaneously its brand differentiator and its biggest commercial risk. For enterprises, this creates a provider evaluation dimension that didn't exist two years ago: does your AI provider's safety position align with your operational needs? An AI company that's too restrictive limits what you can build. One that's too permissive creates governance risk. The answer isn't to pick the most or least restrictive provider — it's to build architecture that doesn't depend on any single provider's policy decisions.

Regulation is arriving faster than expected. While the Pentagon dispute unfolds at the federal level, state AI legislation is advancing rapidly across the US. Virginia's AI Chatbots and Minors Act passed the Senate 39-1. Washington's chatbot bills crossed chambers. Oregon's chatbot bill passed the Senate 26-1. Florida's AI Bill of Rights passed the Appropriations Committee unanimously. California introduced seven new AI bills before its deadline. These aren't future concerns — they're laws being written right now that will affect how enterprises deploy AI in customer-facing applications, including voice AI and automated document processing.

Disposable vs. Defensible: The Enterprise AI Checklist

We build AI systems for enterprises every day. Here's the framework we use to evaluate whether an AI investment is building lasting value or burning runway.

Your AI investment is a wrapper if:

  • The core value disappears when the underlying model gets an upgrade
  • There's no proprietary data being built from usage
  • You're locked into one provider with no switching capability
  • No governance framework defines what the AI can and can't do
  • The “AI-powered” feature is an API call with a branded interface

Your AI investment is defensible if:

  • Every interaction generates proprietary data that makes the system smarter
  • The AI is integrated into actual business workflows, not sitting alongside them
  • You can switch models without disrupting operations
  • Usage boundaries, audit trails, and human oversight are built in from day one
  • You can point to measurable business outcomes — fewer errors, faster processing, recovered hours, improved customer satisfaction

What to Do This Week

Audit your AI stack against the moat framework. For every AI tool, platform, or initiative in your organisation, ask: is this building proprietary value, or is it a wrapper? If your AI investment disappears when the underlying model gets an upgrade, you have a wrapper — and Mowry's warning applies to you.

Start building proprietary training data now. Every customer interaction, every document processed, every operational decision that your AI system handles is an opportunity to build a dataset that makes your system more valuable. If you're not capturing and using this data, you're paying for AI without building equity.
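Capturing that data can start small. The sketch below, with hypothetical field names, shows the core flywheel for document processing: store the model's output alongside the human correction, so every processed document becomes a labelled training example in an append-only dataset.

```python
import io
import json

def capture_example(store, doc_id, model_output, human_corrected):
    """Append one labelled example: prediction plus human ground truth."""
    record = {
        "doc_id": doc_id,
        "prediction": model_output,
        "label": human_corrected,
        # A mismatch flags exactly the examples worth training on next.
        "needs_review": model_output != human_corrected,
    }
    store.write(json.dumps(record) + "\n")  # append-only JSONL dataset
    return record


# In production the store would be durable (object storage, a queue);
# an in-memory buffer keeps the sketch self-contained.
store = io.StringIO()
rec = capture_example(store, "inv-001",
                      {"total": "105.00"},   # what the model extracted
                      {"total": "150.00"})   # what the reviewer corrected
```

The mismatched examples are the valuable ones: they are precisely the cases your generic foundation model got wrong, which is the dataset no competitor can buy.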

Stress-test your provider dependencies. If your primary AI provider changed its terms, raised prices 50%, or — as the Pentagon is discovering — threatened to pull access entirely, what would happen to your operations? If the answer involves weeks or months of disruption, your architecture needs revision.
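A concrete way to run that stress test is to verify the fallback path actually works before you need it. A minimal sketch with simulated providers: try them in priority order, so a vendor outage degrades service instead of halting operations.

```python
class ProviderDown(Exception):
    """Raised when a provider is unreachable or access is revoked."""


def call_with_fallback(providers, prompt):
    """Try providers in priority order; return the first success."""
    errors = []
    for name, fn in providers:
        try:
            return name, fn(prompt)
        except ProviderDown as exc:
            errors.append((name, str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")


# Simulated outage on the primary provider (hypothetical scenario).
def primary(prompt):
    raise ProviderDown("terms changed, access revoked")


def secondary(prompt):
    return f"ok: {prompt}"


name, out = call_with_fallback(
    [("primary", primary), ("secondary", secondary)], "hello")
```

Running this kind of drill on a schedule — deliberately failing the primary in a staging environment — turns "what would happen?" from a guess into a measured recovery time.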

Map the regulatory landscape for your markets. State AI legislation in the US, the EU AI Act, Gulf regulatory frameworks — they're converging on consistent principles: transparency, human oversight, accountability, data sovereignty. If your AI systems aren't designed to operate within these frameworks, compliance costs will compound rapidly.

Ask your AI vendor the hard questions. What happens to your data if the contract ends? Can you export your trained models? What usage restrictions apply, and who decides when they change? The Pentagon is learning the hard way what happens when these questions aren't answered upfront. You don't have to.

The Bottom Line

The market is splitting. On one side: disposable AI — thin wrappers, generic chatbots, white-labelled models that provide no lasting advantage. Google's own startup chief is publicly telling founders this model is dying.

On the other side: defensible AI — built on proprietary data, embedded in workflows, architected to switch models, governed from day one, and delivering outcomes you can measure.

The wrapper is dead. The question for every enterprise leader is straightforward: which side is your AI investment on?