Two things happened yesterday that, taken together, tell you everything about where enterprise AI is headed.
In Amsterdam, Cisco stood in front of 21,000 IT professionals at Cisco Live EMEA and unveiled the biggest update to its AI infrastructure portfolio in the company's history. New networking silicon designed for massive AI clusters. A security platform built specifically to protect autonomous AI agents. An entirely new operating model — AgenticOps — for managing AI-driven IT environments at scale.
Meanwhile, Salesforce quietly cut nearly 1,000 jobs, including roles inside its own Agentforce AI team. Not because AI wasn't working — Agentforce has 18,500 customers and is growing 50% quarter over quarter — but because AI is working so well that the company needs fewer people to operate it. CEO Marc Benioff had already cut customer support from 9,000 to 5,000 employees by deploying AI-driven systems. His explanation was blunt: "I need less heads."
These aren't contradictory stories. They're the same story. AI has graduated from experiment to infrastructure — and the entire industry is reorganising around that reality.
The Infrastructure Moment
For the past two years, the enterprise AI conversation has been dominated by a single question: which model should we use? OpenAI or Anthropic? Open-source or proprietary? Every conference keynote, every vendor pitch, every board presentation centred on model capability.
That conversation is over. Not because models don't matter — they do — but because the bottleneck has moved. The constraint on enterprise AI is no longer intelligence. It's infrastructure: the networking, security, governance, and operations layer that determines whether AI actually works in production.
Cisco's Jeetu Patel, President and Chief Product Officer, put it directly at the Amsterdam conference: "AI innovation is moving faster than ever before and we're delivering the critical infrastructure our customers need to move fast and adopt AI safely and securely." The quote from Cisco's own AI Summit captures it even more sharply: "Models get headlines. Infrastructure decides winners."
The numbers confirm the shift. A widely cited study from MIT's NANDA initiative found that roughly 95% of enterprise AI pilots deliver no measurable return on investment — not because the models failed, but because the infrastructure, integration, and operational foundation weren't ready. A DDN survey of 600 IT decision-makers at large US enterprises found that two-thirds described their AI environments as "too complex to manage." The economics aren't working for most organisations because the infrastructure gap is real.
This is why Cisco's announcement matters far beyond networking hardware.
What Cisco Actually Announced — And Why It Matters
Cisco Live EMEA delivered three categories of announcements that map directly to the challenges enterprises face when moving AI from pilot to production:
1. AI Networking at Scale
The Silicon One G300 is a 102.4 terabits-per-second switching chip designed for massive AI cluster deployments. The practical impact: 33% higher network utilisation and 28% faster AI job completion times compared to non-optimised networks. The new liquid-cooled systems achieve a nearly 70% improvement in energy efficiency, delivering in a single system the same bandwidth that previously required six.
This matters because AI performance is now a networking problem as much as a compute problem. When inference and training workloads run across hundreds of thousands of connections, latency spikes and packet loss translate directly into wasted GPU time and slower results. Martin Lund, Cisco's EVP of hardware, told Reuters that the chip includes "shock absorber" features that re-route data around network problems automatically, within microseconds: "This happens when you have tens of thousands, hundreds of thousands of connections — it happens quite regularly."
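The actual "shock absorber" behaviour lives in proprietary silicon, but the underlying idea (steer flows away from degraded links so GPUs aren't left idle waiting on retransmits) can be sketched in a few lines. Everything here, from the path names to the selection rule, is an illustrative assumption, not Cisco's implementation:

```python
# Toy illustration of failure-aware path selection: prefer lossless links,
# then the lowest-latency one, so packet loss doesn't stall GPU pipelines.
# Path names, metrics, and the rule itself are hypothetical examples.

def pick_path(paths: dict[str, tuple[float, float]]) -> str:
    """paths maps name -> (loss_rate, latency_us). Prefer zero-loss, then fastest."""
    healthy = {name: p for name, p in paths.items() if p[0] == 0.0}
    candidates = healthy or paths  # if every link is lossy, fall back to all
    return min(candidates, key=lambda name: candidates[name][1])

links = {
    "spine-1": (0.02, 5.0),  # 2% loss: fastest, but retries waste GPU time
    "spine-2": (0.0, 7.0),
    "spine-3": (0.0, 6.0),
}
print(pick_path(links))  # → spine-3 (lossless and lowest latency among lossless)
```

Real switching silicon makes this decision per-flow in microseconds and in hardware; the sketch only conveys why loss, not raw speed, dominates the choice.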
For enterprises deploying AI agents across distributed environments, network performance directly determines agent reliability.
2. Security for the Agentic Era
This is where the announcement becomes most relevant for enterprise leaders. Cisco delivered what it called the "biggest-ever updates" to its AI Defense platform — specifically designed to protect AI agents from compromise and manipulation.
The new features include an AI Bill of Materials (AI BOM) that provides centralised visibility into AI software assets, including MCP servers and third-party dependencies; an MCP Catalog that discovers, inventories, and manages risk across MCP server registries; advanced algorithmic red teaming that tests models and agents with adaptive, multi-turn assessments; and real-time agentic guardrails that continuously monitor agent interactions to detect manipulation, such as poisoned tools or prompts designed to trigger unauthorised actions.
Cisco also introduced AI-aware SASE (Secure Access Service Edge) that inspects the intent behind agentic AI interactions, not just the traffic. It evaluates the "why" and "how" of what agents are doing, not just whether data is flowing.
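Cisco hasn't published the API behind these guardrails, but the core pattern, checking every tool call an agent proposes before it executes, is straightforward to illustrate. The tool names, blocked patterns, and function below are all hypothetical, a minimal sketch of the concept rather than any vendor's product:

```python
# Hypothetical agentic guardrail: each proposed tool call is checked against
# an allow-list and simple content rules before it touches enterprise systems.
# All names and rules here are illustrative assumptions.

ALLOWED_TOOLS = {"crm_lookup", "doc_summarise"}  # tools this agent may invoke
BLOCKED_PATTERNS = ("drop table", "delete all", "transfer funds")

def guardrail_check(tool_name: str, arguments: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent action."""
    if tool_name not in ALLOWED_TOOLS:
        return False, f"tool '{tool_name}' not in allow-list"
    lowered = arguments.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            return False, f"arguments match blocked pattern '{pattern}'"
    return True, "ok"

# A poisoned tool call is stopped before execution, not after:
ok, reason = guardrail_check("crm_lookup", "DROP TABLE customers")
print(ok, reason)  # → False arguments match blocked pattern 'drop table'
```

Production guardrails of the kind Cisco describes are far richer (multi-turn context, intent classification, continuous monitoring), but the design choice is the same: the check sits between the agent's decision and the action, so a manipulated agent still can't act.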
This addresses one of the most significant and least discussed risks in enterprise AI deployment. As agents take on more autonomous tasks — processing documents, executing workflows, interacting with tools and databases — the attack surface expands dramatically. An agent that can access your CRM, execute code, and query your financial systems needs security controls that are fundamentally different from a chatbot.
3. AgenticOps — A New Operating Model
Perhaps the most forward-looking announcement was AgenticOps: an agent-first IT operating model built on cross-domain telemetry from Cisco's networking, security, and observability platforms (including Splunk). The model uses AI-driven capabilities to detect issues, perform contextual analysis, and propose remediation — across campus, branch, data centre, and service provider environments.
Ron Westfall, VP at HyperFRAME Research, described it as grounding "AI-driven operations in decades of operational expertise, strong guardrails, and clear human oversight."
For enterprises, this represents the moment where AI infrastructure management itself becomes AI-driven — creating a virtuous cycle where better infrastructure enables better agents, which in turn manage the infrastructure more effectively.
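The AgenticOps loop described above (detect issues from telemetry, analyse them in context, propose remediation under human oversight) can be sketched at the whiteboard level. This is not Cisco's implementation; every class, threshold, and field name is an assumption chosen to make the shape of the loop concrete:

```python
# Illustrative sketch of an AgenticOps-style loop: scan cross-domain
# telemetry for anomalies, enrich each with context, and *propose* a fix
# that requires human approval rather than applying it automatically.
# All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Event:
    source: str      # e.g. "network", "security", "observability"
    metric: str
    value: float
    threshold: float

def detect(events: list[Event]) -> list[Event]:
    """Flag any telemetry event that breaches its threshold."""
    return [e for e in events if e.value > e.threshold]

def analyse(issue: Event) -> str:
    """Contextual summary of the anomaly for the operator."""
    return f"{issue.source}: {issue.metric}={issue.value} exceeds {issue.threshold}"

def propose_remediation(issue: Event) -> dict:
    # Proposal only: a human operator approves before anything changes.
    return {
        "summary": analyse(issue),
        "action": f"reroute or throttle traffic driving {issue.metric}",
        "requires_approval": True,
    }

telemetry = [
    Event("network", "p99_latency_ms", 48.0, 20.0),
    Event("security", "failed_logins", 3.0, 50.0),
]
for issue in detect(telemetry):
    print(propose_remediation(issue))
```

The `requires_approval` flag is the point: the "clear human oversight" Westfall highlights means the agent narrows thousands of signals to a reviewable proposal, while the decision stays with people.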
What Salesforce's Restructuring Actually Signals
The Salesforce layoffs are the other side of the same coin. When a company cuts 1,000 roles — including within the team building its flagship AI product — it's not retreating from AI. It's acknowledging that AI has matured to the point where it changes workforce requirements fundamentally.
Consider the trajectory: Salesforce launched Agentforce in 2024. By February 2026, it had 18,500 customers (9,500 of them paying) and was growing 50% quarter over quarter. Benioff has called it "the core of every product we make now." The platform is succeeding. But its success means the company needs different skills and fewer manual roles across marketing, product management, and data analytics — the very functions that AI agents automate most effectively.
This pattern is about to repeat across every enterprise that successfully deploys AI. The organisations that build working AI systems will need fewer people doing the work that AI handles — and more people managing the infrastructure, governance, and strategy that keeps AI running reliably.
Cisco is building the infrastructure for that world. Salesforce is restructuring to live in it. The question for every other enterprise is whether you're preparing for both.
"The bottleneck in enterprise AI is no longer model capability — it's the infrastructure, security, and operations layer that determines whether AI actually runs in production. The companies building that layer now are the ones that will lead."
Why This Matters for Your Business
Three practical implications for enterprise leaders watching these developments:
The security conversation can't wait. If you're deploying or planning to deploy AI agents that interact with enterprise systems — CRM, ERP, financial databases, customer data — you need an agentic security strategy before you need more model capabilities. Cisco's AI BOM, MCP Catalog, and agentic guardrails represent the kind of infrastructure that needs to be in place before agents go into production, not after. The attack surface for autonomous agents is fundamentally different from traditional software, and most enterprises haven't adapted their security posture to reflect this.
Infrastructure determines ROI. The gap between the 5% of enterprises seeing returns from AI and the 95% that aren't isn't a model problem. It's an infrastructure problem — networking that can handle distributed AI workloads, observability that tracks agent behaviour, governance that maintains compliance across multi-model environments, and operations that can scale without adding proportional headcount. Invest in the orchestration layer, not just the model layer.
Workforce planning starts now. Salesforce didn't wait for AI to fully mature before restructuring. It started cutting roles the moment AI demonstrated it could handle those functions reliably. Benioff went from 9,000 support staff to 5,000 — a 44% reduction — and the company's revenue forecast went up, not down. Every enterprise deploying AI successfully will face this same workforce inflection point. The enterprises that plan for it — reskilling teams, redefining roles, building AI operations capabilities — will navigate the transition far more effectively than those caught off guard.
The Pattern Is Clear
Step back from the individual announcements and the pattern across the past week is unmistakable:
February 5: OpenAI and Anthropic launched enterprise platforms within an hour of each other. The model race is accelerating.
February 6: Big Tech committed $650 billion to AI infrastructure. The physical foundation is being built.
February 9: Agentic AI moved from concept to production capability. The agents are arriving.
February 10: The Super Bowl showed 120 million people that AI is mainstream. Consumer expectations have shifted permanently.
February 11: Cisco built the infrastructure to run agents securely at scale. Salesforce restructured around the reality that AI is working.
The conversation has moved from "should we adopt AI?" to "can we run AI as infrastructure?" The enterprises that answer yes — with the networking, security, governance, and operational foundation in place — will compound their advantage with every new model release, every new agent capability, every new use case that becomes possible.
The ones still running pilots will be wondering why their competitors moved so fast.
