Microsoft Just Built Its Flagship AI Product on a Competitor's Model. That Changes Everything.

Microsoft invested $13 billion in OpenAI. Then it built Copilot Cowork — its most important new AI product — on Anthropic's Claude. The platform now selects the best model for each task regardless of provider. Microsoft's VP of AI said it plainly: "Every 60 days, there's a new king of the hill." When the world's largest enterprise software company goes multi-model by design, the model-agnostic era is officially here.

This is the moment the enterprise AI architecture debate was settled — and it was settled by the most unlikely company imaginable.

Microsoft — which invested $13 billion in OpenAI and built its entire AI strategy around that partnership — just launched its most important new AI product on Anthropic's Claude.

Copilot Cowork, the centrepiece of Microsoft 365 Copilot Wave 3, is a long-running, multi-step AI agent that executes meaningful work across Word, Excel, PowerPoint, Outlook, and Teams. It analyses meetings, compiles research, generates competitive analyses, builds spreadsheets, and creates presentations — not as single responses but as sustained work that unfolds over minutes or hours.

And it runs on Claude.

Microsoft's VP Jared Spataro did not hedge: “Every 60 days at least, there's a new king of the hill. There's so much demand for a platform that doesn't feel like, ‘I have to skip over to the next vendor.’”

When the world's largest enterprise software company — with 15 million paid Copilot seats and a $13 billion investment in its primary AI partner — deliberately builds its flagship feature on a different provider's technology, the architecture question is answered. Multi-model is not a preference. It is the enterprise standard.

What Microsoft Actually Built

Copilot Cowork is not a chatbot enhancement. It is a fundamentally new category of enterprise AI product.

Working closely with Anthropic, Microsoft integrated the same agentic technology that powers Claude Cowork into the Microsoft 365 environment. The system can handle tasks that run for minutes or hours, coordinating actions across multiple applications and producing real outputs — documents, spreadsheets, presentations, research compilations — without requiring continuous human input.

In a demo, Microsoft executive Charles Lamanna showed Cowork analysing a month of meetings with direct reports, compiling customer notes from a business trip, and generating a competitive analysis with an accompanying Word document and Excel spreadsheet. This is not prompting an AI to write a paragraph. This is delegating work.

Critically, Copilot Cowork runs in the cloud within a customer's Microsoft 365 tenant — not locally on a user's device. This means it is covered by Microsoft's enterprise data protection, integrated with what Microsoft calls “Work IQ” (intelligence drawn from a user's emails, files, documents, meetings, and chats), and fully auditable by IT and security teams.

Spataro framed this as a deliberate design choice: “We actually don't work locally, and that's a feature, not a bug.” Enterprise governance, audit trails, access controls, and admin oversight are built into the architecture from the start.

The Multi-Model Architecture That Matters

The deeper significance of Copilot Cowork is not the product itself — it is the architectural principle it embodies.

Microsoft 365 Copilot now hosts models from multiple providers. Claude is available across the full Copilot Chat experience — not just in specialised features but in the everyday chat experience already used by 90% of Fortune 500 companies. In Copilot Studio, administrators building custom agents can select Claude Sonnet 4.5 or Claude Opus 4.1 as the primary model instead of GPT-4o. And Copilot automatically selects the right model for each task based on what it needs to accomplish.

This is not a backup system. This is multi-model by design. Microsoft described it explicitly: “Your work is not limited by one brand of models. Copilot hosts the best innovation from across the industry and chooses the right model for the job regardless of who built it.”

For enterprise AI architecture, this is the most important statement any major technology company has made in 2026. The company that invested more in a single AI provider than any other company in history is now saying — publicly, in its product architecture — that no single model is best for every task.

This validates the architectural principle we have built Lynt-X around from the beginning. Our Minnato orchestration platform does not commit to a single model. It selects the best model for each specific task based on performance, cost, and requirements. When Microsoft — with every financial incentive to lock customers into OpenAI — instead builds a multi-model platform, the enterprise debate is over.
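As a purely hypothetical sketch of the principle described above — not Minnato's actual API, and with invented model names, providers, and scores — a task-based router might pick, for each task category, the cheapest model that clears a quality bar, regardless of who built it:

```python
from dataclasses import dataclass

# Hypothetical illustration only: the model names, providers, quality
# scores, and costs below are invented for this sketch.

@dataclass
class ModelProfile:
    name: str
    provider: str
    quality: float      # benchmark score for this task category, 0-1
    cost_per_1k: float  # cost per 1k tokens, in USD

CATALOGUE = {
    "reasoning": [
        ModelProfile("frontier-a", "provider-x", 0.95, 0.030),
        ModelProfile("frontier-b", "provider-y", 0.93, 0.025),
    ],
    "routine": [
        ModelProfile("small-a", "provider-x", 0.80, 0.002),
        ModelProfile("small-b", "provider-z", 0.78, 0.001),
    ],
}

def select_model(task_type: str, min_quality: float = 0.75) -> ModelProfile:
    """Pick the cheapest model that meets the quality bar for this task,
    regardless of provider."""
    candidates = [m for m in CATALOGUE[task_type] if m.quality >= min_quality]
    return min(candidates, key=lambda m: m.cost_per_1k)

# Routine work goes to the cheapest model that clears the bar;
# complex reasoning goes to a frontier model.
print(select_model("routine").name)
print(select_model("reasoning").name)
```

The point of the sketch is the shape of the decision, not the numbers: when a new provider ships a better or cheaper model, it is added to the catalogue and starts winning selections automatically — no workflow changes required.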

Agent 365: The Orchestration Layer for Enterprise AI

Alongside Copilot Cowork, Microsoft announced Agent 365 — an orchestration platform for AI agents that allows IT and security teams to monitor, govern, and secure agents across an organisation, including agents created using other vendors' software.

This is directly analogous to what our Minnato platform does: provide a governance and orchestration layer that sits above individual AI agents and coordinates their activity with enterprise-grade security, audit trails, and access controls.

Agent 365 will be generally available from May 1 at $15 per user per month, or included in the new Microsoft 365 E7 tier at $99 per user per month. The E7 tier bundles Copilot, Agent 365, and the Microsoft Entra Suite — positioning orchestration, governance, and identity management as a single enterprise package.

IDC projects agent use will increase by an order of magnitude over the next few years, with hundreds of millions and soon billions of agents operating across enterprises. At that scale, the orchestration layer — the platform that discovers, governs, and coordinates agents — becomes the most valuable piece of enterprise AI infrastructure.

Microsoft has already seen more than 500,000 agents created inside its ecosystem. The question is no longer whether enterprises will run AI agents. It is who orchestrates them — and Agent 365 is Microsoft's bid for that role.

What the $285 Billion Market Reaction Tells You

When Anthropic launched Claude Cowork as a standalone product in January 2026, enterprise software stocks shed a combined $285 billion in value. Investors repriced companies whose core functionality overlapped with what Anthropic's desktop AI could automate. Microsoft's own shares fell more than 14%.

Microsoft's response was not to fight the agentic AI wave. It was to ride it — by integrating the very technology that spooked its investors into its own platform, wrapped in enterprise governance and grounded in Microsoft 365 data.

This is the enterprise playbook for the agentic era: do not compete with AI agent platforms. Integrate them. Govern them. Orchestrate them. Make them work within your enterprise security and data framework. The value is not in the agent itself — it is in the infrastructure that makes the agent trustworthy, observable, and controllable.

Our Vult document intelligence platform applies the same principle. When a new AI model delivers better document extraction — whether from OpenAI, Anthropic, or an open-source provider — Vult integrates it through the orchestration layer. The document processing workflow does not change. The governance framework does not change. The model underneath improves, and every document processed benefits automatically.

Our Dewply voice AI platform takes the same approach. When a new model delivers better voice understanding, better sentiment analysis, or better multilingual capability, Dewply routes voice interactions to the improved model — within the same governance framework, the same audit trails, the same compliance controls. The customer experience improves. The enterprise governance remains consistent.
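The pattern behind both platforms can be sketched in a few lines — hypothetically, with invented function names that stand in for real models: the governance wrapper (audit trail, workflow contract) stays fixed while the model underneath is swapped out.

```python
from typing import Callable

# Hypothetical sketch: model_v1/model_v2 are stand-ins for successive
# model versions; the wrapper represents a fixed governance layer.

def make_pipeline(model: Callable[[str], str], audit: list) -> Callable[[str], str]:
    """Wrap any model behind the same governance layer: every call is
    recorded in the audit trail, whichever model runs underneath."""
    def run(payload: str) -> str:
        result = model(payload)
        audit.append({"input": payload, "model": model.__name__, "output": result})
        return result
    return run

def model_v1(text: str) -> str:
    return text.upper()          # stand-in for last quarter's model

def model_v2(text: str) -> str:
    return text.upper() + "!"    # stand-in for an improved model

audit: list = []
pipeline = make_pipeline(model_v1, audit)
pipeline("hello")

# Swap in the improved model: the workflow and the audit trail
# are unchanged; only the model behind them improves.
pipeline = make_pipeline(model_v2, audit)
pipeline("hello")

print(len(audit))  # both calls, old model and new, audited identically
```

The design choice this illustrates: the model is a replaceable dependency, while governance is part of the platform. Upgrading the model never touches the compliance surface.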

Why This Matters for Gulf Enterprises

For enterprises in the Gulf and broader MENA region, Microsoft's multi-model shift carries specific implications.

Microsoft 365 is the dominant enterprise productivity platform across the region. When Microsoft makes Claude available alongside GPT inside Copilot — and lets administrators choose which model powers which workflow — every Gulf enterprise using Microsoft 365 gains immediate access to multi-model AI capabilities without changing platforms.

The Work IQ intelligence layer that grounds Copilot in enterprise data — emails, files, meetings, chats — means AI agents operating in Microsoft 365 already understand your organisational context. Combined with multi-model selection, this means Gulf enterprises can route Arabic-language document processing to models with the strongest multilingual capabilities, complex reasoning tasks to frontier models, and routine operations to cost-efficient models — all within the same platform.

The enterprise governance framework — running in the customer's Microsoft 365 tenant with full audit trails and admin controls — addresses the compliance and data sovereignty requirements of Gulf enterprises operating under local regulatory frameworks. AI agents operate within established security boundaries. Actions are transparent. Outputs are auditable.

The Pattern Is Now Universal

Look at the pattern across March 2026. Apple built its AI strategy on Google's Gemini rather than its own models. Nvidia made NemoClaw hardware-agnostic rather than Nvidia-exclusive. Google embedded Gemini into Workspace while supporting MCP for external agent connectivity. And now Microsoft built its flagship AI product on a competitor's model.

Every major technology company has independently reached the same conclusion: no single AI model is best for every task, and the platform that selects the right model for each job delivers the most value to enterprises.

This is not a trend. It is a settled architectural principle. And the enterprises that design their AI operations around it — with orchestration layers that select the best model for each task, governance frameworks that apply consistently regardless of which model runs underneath, and deployment flexibility that routes between cloud and local infrastructure based on requirements — will capture more value from AI than those still locked to a single provider.

“Microsoft invested $13 billion in OpenAI — and then built its most important new AI product on Anthropic's Claude. Apple partnered with Google's Gemini. Nvidia made NemoClaw work on any hardware. The model-agnostic era is not coming. It arrived. The enterprises that designed for it are already capturing value from every model improvement, from every provider, automatically.”

What to Do This Week

Evaluate Copilot Cowork for your organisation. If you are on Microsoft 365, Copilot Cowork brings agentic AI capabilities — long-running, multi-step tasks across Word, Excel, PowerPoint, and Teams — with enterprise governance built in. Broader access rolls out through the Frontier programme this month.

Activate multi-model capabilities. If your Microsoft 365 admin has not enabled Claude alongside GPT in Copilot, you are missing capability. Different models excel at different tasks. Enabling multi-model selection gives your team the best AI for each job.

Plan for Agent 365. The orchestration layer for enterprise AI agents launches May 1. Evaluate how Agent 365 fits alongside your existing AI operations — and how it integrates with orchestration platforms like Minnato that coordinate AI across systems beyond Microsoft 365.

Audit your single-provider dependencies. If any part of your AI operations is locked to a single model provider, March 2026 has given you every signal you need: the world's largest technology companies have all gone multi-model. Your enterprise should too.

The Debate Is Over

For three years, the enterprise AI world debated whether to commit to a single AI provider or build for model flexibility. The arguments for single-provider simplicity were real: fewer integrations, simpler governance, consistent performance.

But March 2026 ended that debate. Apple, Microsoft, Nvidia, and Google — the four most valuable technology companies on Earth — all independently chose multi-model architecture. Not because they could not build or buy a single model. Because no single model is best for every task. Because the provider landscape changes every 60 days. Because enterprises need the flexibility to capture value from every improvement, from every provider, automatically.

The model-agnostic era is not a prediction. It is the present. And the orchestration layer that coordinates models, governs agents, and routes each task to the best available option is the enterprise AI infrastructure that will matter most for the next decade.

Microsoft just proved it — by building its flagship product on a competitor's model.