There's a number buried in MuleSoft's 2026 Connectivity Benchmark that should stop every enterprise leader mid-sentence.
The average enterprise now runs 12 AI agents.
That's not a projection. That's today. Across 1,050 IT leaders surveyed worldwide, the data shows that AI agents have moved from experiment to infrastructure — faster than most organisations planned for. Eighty-three percent report that most or all of their teams are actively using AI agents. IDC projects more than one billion agents globally by 2029, a forty-fold increase from 2025.
But here's the number that matters more: half of those enterprise agents operate in disconnected silos.
They can't share context. They can't coordinate workflows. They can't hand off tasks to each other. An AI agent processing a customer inquiry in service can't access what the sales agent knows about that customer's pipeline. A document processing agent can't trigger the approval workflow agent. A voice AI agent handling a phone call can't pull context from the agent managing the same customer's email thread.
The result is what the industry is calling “agent sprawl” — and it's the most expensive problem in enterprise AI right now.
The Agent Sprawl Problem
Building AI agents has become remarkably easy. Low-code platforms, agent builders, and pre-built templates mean that any team with a business problem and a vendor account can deploy an agent in days. Salesforce's Spring '26 release alone introduced Agentforce Builder, Agentic Enterprise Search, Agentforce Voice for Financial Services, and Agentic Order Routing — each deployable with minimal technical expertise.
The ease of building is the problem. When every department can spin up its own agent, they do. Marketing builds a content agent. Sales builds a prospecting agent. Support builds a ticket-routing agent. Finance builds an invoice processing agent. Legal builds a contract review agent. Each one works well in isolation. None of them know the others exist.
The MuleSoft benchmark found that the average organisation now runs 957 applications — up from 897 a year earlier — but only 27% are integrated. More than a quarter of APIs remain ungoverned. And just 54% of enterprises have a centralised governance framework for AI agents.
Andrew Comstock, SVP and GM of MuleSoft at Salesforce, was direct about the challenge: “The real challenge isn't just building an agent. It's the last mile of AI execution — where agents must be discovered, governed, and orchestrated to drive outcomes.”
This is the gap between having AI agents and having an AI-powered enterprise. The difference isn't capability — it's coordination.
What Connected Agents Actually Deliver
The contrast between siloed agents and orchestrated agents shows up immediately in enterprise results.
Reddit deployed Agentforce 360 to handle customer support workflows. The result: 46% of support cases deflected and resolution times cut by 84% — from an average of 8.9 minutes down to 1.4 minutes. That kind of improvement doesn't come from a single agent answering questions faster. It comes from agents that share context, coordinate across workflows, and hand off seamlessly between AI and human operators.
Salesforce's own internal deployment tells a similar story. The company uses Agentforce across sales, IT, and support — not as isolated tools but as coordinated systems where agents handle routine tasks across departments while employees focus on strategy and complex problem-solving. The platform now serves 12,000 customers.
Walmart uses AI agents for payroll management, merchandising decisions, and product discovery — all connected through shared data infrastructure so each agent operates with full context rather than partial information. Amazon has deployed over one million warehouse robots as part of an integrated operational system where AI agents coordinate across logistics, inventory, and fulfilment.
The pattern is consistent: the enterprises seeing measurable returns from AI agents aren't the ones with the most agents. They're the ones with the best coordination between agents.
The Architecture That Solves It
The industry is converging on a specific architectural answer to agent sprawl: an orchestration layer that sits above individual agents and coordinates their activity.
Salesforce's approach is MuleSoft Agent Fabric — what the company positions as “an operating system for the agentic enterprise.” Released in January 2026, it includes Agent Scanners that automatically discover agents across Salesforce, Amazon Bedrock, Google Vertex AI, and Microsoft Copilot Studio. These scanners normalise metadata, align agent descriptions to protocol specifications, and synchronise everything into a centralised registry.
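To make the discovery-and-registry idea concrete, here is a minimal sketch of what a cross-provider agent registry could look like. All names here (`AgentRecord`, `AgentRegistry`, the provider strings) are illustrative assumptions, not MuleSoft's actual API — the point is the pattern: scanners normalise agent metadata into a common shape and upsert it into one catalogue.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """Normalised metadata for one discovered agent (illustrative fields)."""
    agent_id: str
    name: str
    provider: str            # e.g. "salesforce", "bedrock", "vertex", "copilot-studio"
    description: str
    capabilities: list = field(default_factory=list)

class AgentRegistry:
    """Central catalogue that scanners feed normalised records into."""
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord):
        # Upsert: a re-scan of the same agent refreshes its metadata
        # instead of creating a duplicate entry.
        self._agents[record.agent_id] = record

    def find_by_capability(self, capability: str):
        # Discovery: any orchestrator can ask "who can do X?" across providers.
        return [a for a in self._agents.values() if capability in a.capabilities]

registry = AgentRegistry()
registry.register(AgentRecord("sf-1", "Order Router", "salesforce",
                              "Routes incoming orders", ["order-routing"]))
registry.register(AgentRecord("br-1", "Doc Extractor", "bedrock",
                              "Extracts invoice fields", ["extraction"]))
matches = registry.find_by_capability("extraction")
```

The key design choice is that the registry stores one normalised schema regardless of which platform hosts the agent — which is what makes cross-provider discovery possible at all.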
The Spring '26 release also introduced Agentforce Script — a hybrid reasoning capability that combines deterministic workflows with flexible LLM reasoning. Required business logic always runs in sequence while AI reasoning handles nuance. This gives enterprises agents that are both precise and adaptable — critical for production environments where predictable outcomes matter as much as intelligent responses.
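The hybrid pattern — fixed business logic in sequence, with a flexible reasoning step slotted in — can be sketched in a few lines. This is a generic illustration of the concept, not Agentforce Script itself; the function names and the rule standing in for the LLM call are assumptions.

```python
def validate_order(order):
    # Deterministic step: required business logic always runs, in order.
    if order["amount"] <= 0:
        raise ValueError("amount must be positive")
    return order

def reason_about_routing(order, llm=None):
    # Flexible step: in production an LLM call would go here;
    # a simple rule stands in for it in this sketch.
    if llm is not None:
        return llm(f"Classify this order for routing: {order}")
    return "expedite" if order.get("priority") == "high" else "standard"

def process_order(order, llm=None):
    order = validate_order(order)                 # step 1: always runs first
    route = reason_about_routing(order, llm)      # step 2: adaptive reasoning
    return {"order_id": order["id"], "route": route}  # step 3: deterministic output

result = process_order({"id": "A-1", "amount": 120, "priority": "high"})
```

Because validation and output formatting are plain code, their behaviour is predictable and testable; only the routing decision is delegated to reasoning — which is exactly the precision-plus-adaptability trade the pattern aims for.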
And Salesforce adopted MCP — the Model Context Protocol — for its Agentforce platform, enabling agents to connect to external data and tools through the same universal standard that the broader industry has converged on.
This architectural pattern — discovery, governance, and orchestration of agents across providers — is exactly what we've built at Lynt-X. Our Minnato platform operates as an orchestration layer that coordinates AI agents regardless of which model powers them, which provider hosts them, or which department deployed them. Every agent in the enterprise connects through standardised protocols, shares context where governance permits, and operates within audit trails that make every AI decision explainable.
The difference between agent sprawl and an agentic enterprise isn't more agents. It's the orchestration layer that makes them work together.
Voice AI Enters the Agentic Enterprise
One of the most significant additions in Salesforce's Spring '26 release is Agentforce Voice for Financial Services — AI agents deployed on voice channels to resolve common banking and collections inquiries at scale.
This marks a turning point for enterprise voice AI. Voice has historically been the hardest channel to automate intelligently because conversations are unpredictable, emotionally nuanced, and require real-time access to customer context across multiple systems. A voice agent that can't access the customer's account history, recent transactions, and open support tickets isn't useful — it's frustrating.
The new Agentforce Voice agents are built on the same integrated platform that connects sales, service, and data intelligence — meaning the voice agent has the same context as every other agent in the enterprise. It knows the customer's history, understands their current situation, and can escalate to a human agent with full context when the conversation requires it.
This is the architecture our Dewply platform is built for. Voice AI that operates as part of a coordinated agent ecosystem — not as an isolated channel. Every voice interaction has access to the full enterprise context: customer history, open cases, product information, pricing, and account status. When Dewply hands a conversation to a human agent, the full context transfers with it — no repetition, no lost information, no frustrated customers.
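A context-preserving handoff is, at bottom, a structured payload. The sketch below shows one plausible shape for it — the `HandoffContext` fields are assumptions for illustration, not Dewply's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class HandoffContext:
    """What a human agent receives when a voice AI escalates (illustrative)."""
    customer_id: str
    transcript: list          # conversation turns so far
    open_cases: list
    account_status: str
    escalation_reason: str

def escalate_to_human(ctx: HandoffContext) -> dict:
    # Serialise the full context so the human sees the whole conversation,
    # not a blank screen — the customer never has to repeat themselves.
    payload = asdict(ctx)
    payload["channel"] = "voice"
    return payload

payload = escalate_to_human(HandoffContext(
    customer_id="C-42",
    transcript=[("customer", "My card was charged twice."),
                ("agent", "I can see two pending charges on the account.")],
    open_cases=["CASE-901"],
    account_status="active",
    escalation_reason="refund requires human approval",
))
```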
The Salesforce move validates that enterprise voice AI is no longer about building a better chatbot. It's about connecting voice to the same agentic infrastructure that powers every other enterprise workflow.
The Governance Imperative
Alcon, the medical device manufacturer, provides a case study in what happens when agent deployment outpaces governance. The company deployed 900 agents in under a year — and immediately had to establish a formal AI governance board, enforce human-in-the-loop controls for customer-facing agents, and standardise access through API-led architecture.
Alcon's solution was to “MCP-ify” its existing MuleSoft APIs so agents across Salesforce Agentforce, AWS Bedrock, and Azure could reuse them safely. Every agent connects through governed interfaces. Every action generates an audit trail. Every customer-facing decision includes human oversight.
This isn't overhead — it's the infrastructure that makes AI agents deployable in regulated industries. Financial services, healthcare, legal, government — every sector where compliance matters requires this governance layer before agents can move from pilot to production.
Our Vult document intelligence platform embeds this governance by design. When an AI agent extracts data from a contract or invoice, confidence scores determine whether the output proceeds automatically or gets flagged for human review. When multiple agents collaborate on a document workflow — extraction, validation, approval, archiving — each step generates audit records that make the entire chain traceable and defensible.
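The confidence-gating pattern described above can be sketched as a small routing function with an audit trail. The threshold value and field names are assumptions for illustration; in practice thresholds would be tuned per document type and jurisdiction.

```python
import time

AUTO_APPROVE_THRESHOLD = 0.90   # assumed cutoff; tune per document type

audit_log = []

def route_extraction(field_name, value, confidence):
    """Send high-confidence extractions straight through; flag the rest."""
    decision = "auto" if confidence >= AUTO_APPROVE_THRESHOLD else "human_review"
    audit_log.append({            # every decision leaves a traceable record
        "ts": time.time(),
        "field": field_name,
        "value": value,
        "confidence": confidence,
        "decision": decision,
    })
    return decision

route_extraction("invoice_total", "1,240.00", 0.97)   # clean extraction
route_extraction("po_number", "PO-88?", 0.62)          # ambiguous, needs a human
```

The audit list is what makes the chain defensible: every automated pass-through and every human-review flag is recorded with the confidence score that drove it.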
The enterprises that treat governance as a feature — not a constraint — are the ones that scale AI agents fastest. Because governance is what gives leadership the confidence to approve broader deployment.
Five Steps to Close the Orchestration Gap
1. Map your agent landscape. How many AI agents are running in your organisation today? Which teams deployed them? Which systems do they access? Which ones share context and which operate in isolation? Most enterprises don't have a complete inventory — and you can't orchestrate what you can't see.
2. Centralise agent discovery. Implement a registry where every AI agent in the enterprise is catalogued — with metadata about what it does, what data it accesses, which models it uses, and what governance controls apply. MCP-compatible infrastructure makes this significantly simpler.
3. Define governance before you scale. Establish human-in-the-loop rules, escalation paths, confidence thresholds, and audit requirements before deploying more agents. Retrofitting governance onto running agents is exponentially harder than building it in from the start.
4. Connect agents through standardised protocols. Every agent should communicate through MCP or equivalent standardised interfaces — not custom point-to-point integrations. This enables agent discovery, context sharing, and coordinated workflows without building bespoke connections for every agent pair.
5. Build the orchestration layer. Deploy infrastructure that coordinates agents across departments, providers, and workflows. The orchestration layer routes tasks, manages handoffs, enforces governance, and provides the visibility that leadership needs to trust AI agents with real operational responsibility.
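The final step — the orchestration layer itself — can be sketched as a minimal dispatcher: look up an agent by capability, apply a governance check before execution, record the handoff, and feed one agent's output into the next. Every name here is a hypothetical illustration, not any vendor's API.

```python
class Orchestrator:
    """Minimal routing layer: capability lookup, governance gate, audit trail."""
    def __init__(self):
        self.agents = {}      # capability -> handler function
        self.audit = []

    def register(self, capability, handler):
        self.agents[capability] = handler

    def dispatch(self, capability, task, approved_by_policy=True):
        if not approved_by_policy:            # governance gate before execution
            self.audit.append(("blocked", capability))
            return None
        result = self.agents[capability](task)
        self.audit.append(("done", capability))   # every action is recorded
        return result

orch = Orchestrator()
orch.register("extract", lambda task: {"fields": 12})
orch.register("approve", lambda task: {"status": "approved"})

extraction = orch.dispatch("extract", "invoice-77")
approval = orch.dispatch("approve", extraction)   # handoff: output feeds next agent
```

The handoff on the last line is the whole point: because both agents sit behind one dispatcher, the extraction agent's output flows directly into the approval agent with no bespoke point-to-point integration between them.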
“The enterprises winning with AI agents in 2026 aren't the ones deploying the most agents. They're the ones where every agent can see what the others know, hand off work seamlessly, and operate within governance frameworks that make every decision auditable. The orchestration layer is the difference between agent sprawl and an agentic enterprise.”
The Coordination Era
We've entered a new phase in enterprise AI. The building phase — where the challenge was creating capable AI agents — is largely solved. Low-code platforms, open-source models, and universal connectivity protocols have made agent creation accessible to every team in every department.
The coordination phase is just beginning. The challenge now is making those agents work together as a coherent system rather than a collection of isolated tools. Discovery. Governance. Context sharing. Workflow orchestration. Seamless handoffs between AI and human operators.
The enterprises that solve coordination first will operate at a fundamentally different level of efficiency, speed, and intelligence than those still building agents one at a time.
The agents are built. The question is whether they can work together.
