Back to Blog

97 Million Installs in 16 Months. MCP Is Now the TCP/IP of AI Agents — And Every Enterprise Needs to Understand Why.

The Model Context Protocol crossed 97 million monthly SDK downloads in March 2026 — growing 4,750% in 16 months. Every major AI provider now ships MCP-compatible tooling. The Linux Foundation governs it. 40% of enterprise applications will include AI agents by year-end. And every one of those agents needs MCP to connect to the systems where your business actually runs. MCP is not a developer tool. It is the infrastructure standard that determines whether your enterprise is visible to AI agents — or invisible.

In November 2024, a small open-source project was released with little fanfare. It defined a standard way for AI agents to connect to external tools and data sources. Sixteen months later, it has 97 million monthly SDK downloads, support from every major AI provider, governance under the Linux Foundation, and an ecosystem of over 5,800 servers covering every major enterprise application category.

The Model Context Protocol — MCP — is now the infrastructure standard for AI agents. And its implications for enterprises are as significant as the standards that defined previous technology eras.

Just as HTTP defined how browsers connect to websites, and REST defined how applications connect to APIs, MCP defines how AI agents connect to the systems where your business operates. Every AI agent that needs to query your CRM, update your ERP, file a support ticket, process a payment, or access your data warehouse will connect through MCP.

This is not a developer concern. It is an enterprise infrastructure decision that determines whether your business systems are accessible to the AI agents that will increasingly define operational efficiency, customer experience, and competitive advantage.

What MCP Actually Is

MCP is an open protocol that standardises how AI models connect to external tools, data sources, and enterprise applications. Before MCP, connecting an AI agent to a business system required custom integration work for every combination of AI model and target system. If you wanted Claude to access your Salesforce data, you built a custom connector. If you wanted GPT-5.4 to update your Jira board, you built a different custom connector. Each integration was bespoke, fragile, and expensive to maintain.

MCP eliminates this by providing a universal connection standard. An MCP server exposes a set of tools and data sources through a standardised interface. Any MCP-compatible AI agent can discover, connect to, and use those tools without custom integration work.

The architecture uses a client-server model built on JSON-RPC 2.0. AI agents (clients) communicate with enterprise tools (MCP servers) through a standardised protocol that handles capability discovery, authentication, and data exchange. The protocol supports both local connections (tools running on the same machine) and remote connections (tools running as cloud services) — making it suitable for everything from a developer's laptop to a production enterprise deployment.
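Concretely, the capability-discovery step looks like an ordinary JSON-RPC 2.0 exchange. The sketch below shows the shape of a `tools/list` round trip: the method name follows the MCP specification, while the example tool (`crm_lookup`) and its schema are hypothetical.

```python
import json

# An agent (client) asks an MCP server which tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with each tool's name, description, and input schema,
# so the agent knows exactly how to call it -- no custom integration needed.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id, per JSON-RPC 2.0
    "result": {
        "tools": [
            {
                "name": "crm_lookup",
                "description": "Look up a customer record by account ID",
                "inputSchema": {
                    "type": "object",
                    "properties": {"account_id": {"type": "string"}},
                    "required": ["account_id"],
                },
            }
        ]
    },
}

# Messages travel as plain JSON over a local or remote transport.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/list"
```

The same request/response framing carries tool invocations and data exchange; only the method name and parameters change.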

The result: instead of building custom integrations between every AI agent and every enterprise system, you deploy MCP servers for your business tools once, and every AI agent can connect to them immediately.

The Adoption Curve That Changed Everything

MCP's adoption trajectory is among the fastest on record for an infrastructure protocol.

November 2024: launch, approximately 2 million monthly SDK downloads.
April 2025: OpenAI adopted MCP across its Agents SDK, pushing downloads to 22 million.
July 2025: Microsoft integrated MCP into Copilot Studio, reaching 45 million.
November 2025: AWS added support, hitting 68 million.
December 2025: Anthropic donated MCP to the Linux Foundation's newly formed Agentic AI Foundation, co-founded with OpenAI and Block, with support from Google, Microsoft, AWS, Cloudflare, and Bloomberg.
March 2026: 97 million monthly SDK downloads, with 5,800+ community and enterprise servers covering databases, CRMs, cloud providers, productivity tools, developer tools, e-commerce platforms, analytics services, and more.

For context, the React JavaScript library — one of the most widely adopted developer tools in history — took approximately three years to reach 100 million monthly downloads. MCP achieved comparable scale in 16 months.

The speed of adoption reflects a simple truth: every enterprise deploying AI agents needed a connectivity standard, and MCP won because it solved the problem cleanly, was open-source from day one, and was adopted by every major provider simultaneously rather than being locked to a single vendor.

Why This Is an Enterprise Infrastructure Decision

MCP's significance for enterprise leaders goes beyond its technical function. It determines a fundamental question: can AI agents access your business systems?

Gartner projects that 40% of enterprise applications will include task-specific AI agents by the end of 2026, up from less than 5% in 2025. Every one of those agents needs a way to connect to the tools and data sources that your business runs on. MCP is that connection layer.

If your enterprise systems expose MCP servers, AI agents can discover and connect to them immediately. Your CRM data is accessible. Your ERP workflows can be triggered. Your document management system can be queried. Your analytics platform can be accessed. Every system with an MCP server becomes part of the intelligent fabric that AI agents operate within.

If your enterprise systems do not expose MCP servers, they become invisible to AI agents. The data exists but agents cannot reach it. The workflows exist but agents cannot trigger them. Your business systems become silos — not because of organisational barriers, but because of a missing infrastructure standard.

This is the same dynamic that played out with web standards in the 2000s. Businesses that adopted HTTP and built websites became accessible to the entire internet. Businesses that did not became invisible to digital customers. MCP is the equivalent standard for AI agents — and the same accessibility dynamic applies.

The Security Question Enterprises Must Address

MCP's rapid adoption has outpaced its security hardening — and enterprises need to address this gap proactively.

Security researchers disclosed more than 30 CVEs affecting MCP servers, clients, and infrastructure between January and February 2026. Research found command injection vulnerabilities in 43% of tested MCP implementations. The pattern is familiar from previous infrastructure standards: rapid adoption creates a large attack surface before security practices mature.

For enterprises, the security considerations are specific. AI agents connected through MCP can read data, create records, trigger workflows, and execute actions across enterprise systems. An improperly secured MCP deployment gives an AI agent — and potentially an attacker who compromises that agent — access to the same systems.

Enterprise security teams should approach MCP with the same rigour they apply to API security. Authentication must use enterprise SSO rather than static secrets. Audit trails must capture what each agent requested, what was executed, and what the outcome was. Tool-level permissions must define which operations are read-only and which allow writes. Rate limiting must prevent runaway agents from overwhelming systems. And the MCP servers themselves must be deployed within the enterprise security perimeter, not exposed to the public internet without controls.
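Two of those controls, rate limiting and audit trails, can sit in a thin wrapper between the protocol layer and the underlying system. The sketch below is a minimal illustration of that idea, not part of any MCP SDK; the class, tool name, and limits are all hypothetical.

```python
import time
from collections import deque

class GuardedTool:
    """Hypothetical wrapper that applies a sliding-window rate limit and an
    audit trail to a single tool before requests reach the enterprise system."""

    def __init__(self, name, handler, max_calls=5, window_seconds=60.0):
        self.name = name
        self.handler = handler            # the real tool implementation
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()              # timestamps of recent invocations
        self.audit_log = []               # who asked, what ran, what happened

    def invoke(self, agent_id, arguments):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window, then enforce the cap.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            self.audit_log.append((agent_id, self.name, arguments, "rate_limited"))
            raise RuntimeError(f"rate limit exceeded for {self.name}")
        self.calls.append(now)
        result = self.handler(**arguments)
        self.audit_log.append((agent_id, self.name, arguments, "ok"))
        return result

# Usage: wrap a hypothetical read-only CRM lookup with a tight limit.
tool = GuardedTool("crm_lookup",
                   lambda account_id: {"account_id": account_id},
                   max_calls=2)
tool.invoke("agent-7", {"account_id": "A-100"})
tool.invoke("agent-7", {"account_id": "A-101"})
```

A third call within the window would be refused and the refusal itself recorded, which is the property an auditor wants: the log captures denied requests as well as successful ones.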

The MCP roadmap for 2026 includes enterprise authentication improvements and a standardised registry for server discovery. But enterprises should not wait for these features to deploy MCP securely. The controls needed — network segmentation, authentication, authorisation, auditing — are standard enterprise security practices applied to a new protocol.

What the Linux Foundation Governance Means

In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation, co-founded with OpenAI and Block. Google, Microsoft, AWS, Cloudflare, and Bloomberg joined as platinum members.

This governance structure matters for enterprise adoption decisions because it removes single-vendor risk. MCP is not Anthropic's proprietary protocol. It sits alongside Kubernetes and PyTorch in the Linux Foundation's portfolio of open infrastructure standards. Its evolution is governed by a neutral body with multi-vendor participation.

For enterprises evaluating long-term architecture decisions, this governance model provides confidence that MCP will be maintained, evolved, and supported across the industry regardless of what happens to any single vendor. The same confidence that enterprises have in HTTP, TCP/IP, and other foundation-governed standards now applies to MCP.

The Foundation also hosts two complementary projects: goose (Block's local AI agent framework) and AGENTS.md (OpenAI's coding agent guidance standard). Together, these three projects provide the standards layer for how AI agents connect to tools (MCP), how they operate locally (goose), and how they behave when writing code (AGENTS.md). The agentic AI infrastructure stack is being standardised — and MCP is its connectivity layer.

What Every Enterprise Should Do This Quarter

The MCP standard is settled. The ecosystem is mature. The adoption curve is clear. Here is what enterprise leaders should prioritise in Q2 2026.

Audit your API exposure. MCP standardises the connection between AI agents and your business systems. But if your core systems — ERP, CRM, billing, document management, analytics — do not expose clean APIs, there is nothing for MCP to connect to. Map which systems are API-ready today and which need work. That mapping is your MCP readiness backlog.

Identify your highest-value agent use cases. Not every business process needs an AI agent. Start with the processes where information is scattered across multiple systems and decisions are slow because someone has to gather data from several sources before acting. Customer onboarding, invoice processing, compliance monitoring, procurement, and incident response are common high-value starting points.

Deploy MCP servers for your core systems. The 5,800+ existing MCP servers cover most major enterprise applications. For systems that do not have existing servers, building a basic MCP server is a well-documented engineering task. The investment is modest compared to the custom integration work that MCP replaces.
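To give a sense of the scale of that task, the core of any MCP server is a small dispatch loop: register tools, answer discovery requests, execute calls. The stdlib-only sketch below illustrates that loop under the MCP method names (`tools/list`, `tools/call`); a real deployment would build on an official MCP SDK, which also handles transport, initialisation, and schema validation, and the `ticket_create` tool here is hypothetical.

```python
import json

# Registry of tools this server exposes (names and handlers are illustrative).
TOOLS = {
    "ticket_create": {
        "description": "File a support ticket",
        "handler": lambda title: {"ticket_id": "T-1", "title": title},
    },
}

def handle(message: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 message to the right behaviour."""
    if message["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif message["method"] == "tools/call":
        params = message["params"]
        result = TOOLS[params["name"]]["handler"](**params["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": message["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": message["id"], "result": result}

# An agent invoking the tool it discovered via tools/list:
reply = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                "params": {"name": "ticket_create",
                           "arguments": {"title": "VPN outage"}}})
```

Everything beyond this loop, authentication, logging, schema enforcement, is where the real engineering effort goes, which is why the SDKs exist.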

Define your security posture. Decide which operations should be read-only for AI agents. Set tool-level permissions. Ensure authentication uses enterprise identity management. Establish audit trails. These decisions should be made before MCP deployment, not after.
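The read-only/write decision can be captured as a small, auditable policy table rather than logic scattered across integrations. A minimal sketch, with hypothetical tool names and roles:

```python
# Classify each exposed tool as read or write; deny anything unclassified.
TOOL_MODES = {
    "crm_lookup": "read",
    "report_export": "read",
    "erp_post_invoice": "write",
    "ticket_create": "write",
}

# Which agent roles may perform writes (illustrative role names).
ROLE_ALLOWS_WRITE = {"viewer": False, "operator": True}

def is_allowed(role: str, tool: str) -> bool:
    mode = TOOL_MODES.get(tool)
    if mode is None:
        return False                  # unknown tools are denied by default
    if mode == "read":
        return True                   # reads are open to any known role
    return ROLE_ALLOWS_WRITE.get(role, False)
```

Deny-by-default for unclassified tools is the important design choice: a newly added tool stays inaccessible until someone explicitly decides whether it is a read or a write.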

Build MCP-native, not MCP-retrofit. If your enterprise is building new AI capabilities — agent workflows, automation systems, intelligent assistants — design them for MCP from the start. Retrofitting MCP onto systems built with proprietary integration patterns is expensive. Building MCP-native from day one is architecturally clean and future-proof.

The enterprises that make their systems MCP-accessible in 2026 will be the ones whose business operations are visible to AI agents — and who capture the efficiency, speed, and intelligence that agent-connected operations deliver. The enterprises that delay will spend 2027 explaining to their boards why their systems are invisible to the AI infrastructure that their competitors are already running on.

“97 million installs. 5,800+ servers. Every major AI provider. Linux Foundation governance. 40% of enterprise applications will include AI agents by year-end. MCP is not a developer tool — it is the infrastructure standard that determines whether AI agents can see and interact with your enterprise systems. The businesses that expose MCP servers become part of the intelligent fabric. The businesses that do not become invisible to the agents that will increasingly define how work gets done. The standard is settled. The ecosystem is ready. The question is whether your enterprise systems are accessible — or invisible.”