
800 Enterprise Leaders Were Just Surveyed. 46% Say Their AI Initiatives Are Falling Short. The Operational Foundation Is The Difference.

A new Oxford Economics survey released today found that 74% of organisations are increasing AI investment while 46% report their initiatives have not met expectations. The data point worth noticing is not the underperformance number. It is the pattern separating the organisations that operate AI cleanly from those that have not yet built the operational foundation underneath their deployments.

The survey, conducted by Oxford Economics in partnership with Coastal, polled 800 US business and technology leaders whose organisations each have at least one AI initiative actively in production. The headline numbers are stark in a useful way. Seventy-four percent of organisations are increasing their AI investment in 2026. Forty-six percent report their initiatives have not met expectations. Only a small minority report that AI is delivering measurable business value.

The temptation when reading these numbers is to focus on the underperformance. The 46% headline maps cleanly onto a long line of similar findings — the Writer survey's 79% facing challenges, the broader 97% deployment versus 29% ROI gap, the WEF skills-gap data. Underperformance is the easy story.

The more useful story is what separates the organisations that are succeeding from the ones that are not. The Oxford Economics research is unusually clear on this point. The organisations getting results are not distinguished by which AI technology they use. They are distinguished by how they operate it. That sentence is the conclusion of the report and it deserves to be the starting point of every operational AI conversation this quarter.

This post is for operations leaders, COOs, and CIOs whose AI portfolios are still mostly in the underperforming half — and for the boards that are paying attention to why.

The Pattern Across The 800 Organisations

Five operational practices separate the organisations seeing results from the ones still stuck.

The first practice is treating data as a continuous requirement. Underperforming organisations approach data as a one-time preparation step before deployment. Successful organisations treat data quality, data refresh, data lineage, and data governance as ongoing operational work that scales with the AI deployment, not work that completes before deployment begins. The shape of that work is operationally similar to running a payments system or a regulated reporting pipeline. It does not stop. It does not get easier. It just becomes routine.

The second practice is designing AI for how people actually work, not for how processes are documented to work. The gap between formal process documentation and actual workflow is usually large in regulated enterprises. AI deployments designed around documented process produce output that humans operating in the actual workflow cannot use without retranslation. AI deployments designed around observed workflow get adopted because the output fits the work.

The third practice is defining the problem before selecting the solution. This sounds basic. It is also the practice violated most often in 2026 enterprise AI projects. Capability-driven procurement — “we have access to this model, what can we do with it?” — produces deployments that lack a clear business outcome. Problem-driven procurement — “this workflow costs us X hours per cycle, here is what an AI deployment would have to do to reduce that” — produces deployments with measurable success criteria built in from the start.

The fourth practice is assigning clear ownership for production performance. Underperforming organisations have AI initiatives sponsored by executives and built by project teams, with no named owner accountable for performance after go-live. Successful organisations assign production AI to operational owners with the same clarity that production cybersecurity, production payments, and production financial reporting are owned. The owner has authority, has budget, and has the responsibility to keep the system performing.

The fifth practice is continuous management built in from the start. Production AI is not a system that operates correctly once deployed and continues operating correctly without attention. Model behaviour drifts. Data distributions change. User expectations evolve. Regulatory environments tighten. The organisations operating cleanly built continuous management into the original deployment design — observability, performance review cadences, incident response runbooks, model refresh cycles, governance audits. The organisations that have not are discovering this work has to be retrofitted under pressure.
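The continuous-management loop described above can be made concrete. Below is a minimal, illustrative drift check of the kind an observability cadence might run on a schedule: it compares a recent model-score distribution against a deployment-time baseline using a population stability index. The bin edges, threshold, and sample data are assumptions for illustration, not anything from the survey.

```python
# Illustrative drift check for the continuous-management practice.
# Compares a recent score distribution to a deployment-time baseline.
import math

def psi(baseline: list[float], recent: list[float], edges: list[float]) -> float:
    """Population stability index over fixed bins; higher means more drift."""
    def frac(xs):
        counts = [0] * (len(edges) + 1)
        for x in xs:
            i = sum(x > e for e in edges)  # index of the bin x falls into
            counts[i] += 1
        # Clamp empty bins to a tiny fraction to avoid log(0).
        return [max(c / len(xs), 1e-6) for c in counts]
    b, r = frac(baseline), frac(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

baseline = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
recent   = [0.6, 0.7, 0.8, 0.85, 0.9, 0.95]  # distribution has shifted upward
print(psi(baseline, recent, [0.33, 0.66]))
```

A PSI above roughly 0.2 is a common rule of thumb for flagging drift. The point is not this particular statistic; it is that the check is cheap, scheduled, and owned, rather than retrofitted after an incident.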

These five practices are not exotic. None of them requires a new technology category, a new vendor relationship, or a new strategic framework. They are the standard operational disciplines that every other piece of production enterprise infrastructure has always required. The reason 46% of AI initiatives are falling short is that the operational discipline applied to AI has lagged the discipline the same organisations apply to their other production systems.

Why Operational Discipline Beats New Capabilities

Two structural reasons make operational discipline the deciding variable for 2026 AI outcomes.

The first reason is that the model capability layer has largely commoditised at the leading edge. The frontier models from Anthropic, OpenAI, Google, Meta, and the long tail of capable open-weight options are close enough on enterprise workloads that vendor choice rarely produces a 2x outcome difference. Operational discipline produces 5x outcome differences regularly. When a 2x technology decision sits next to a 5x operational decision, the operational decision is the one to focus on.

The second reason is that operational discipline compounds. An AI deployment with strong operational foundations gets better month after month — observability improvements catch quality drift, governance reviews catch policy gaps, performance management catches cost inefficiency. An AI deployment without operational foundations does not just stay still. It degrades, because the systems around it change while it does not. The compounding works both ways. Operational discipline compounds advantage. Operational neglect compounds disadvantage.

The talent gap data we covered last week reinforces both points. The compliance and operational professionals who can run AI as production infrastructure are scarce. The organisations that have built operational discipline into the deployment design rely less on those scarce roles than the organisations that have to retrofit it. Operational foundations are partially a substitute for the scarce talent — they reduce how much of that talent every deployment requires.

The Architecture That Makes Operational Discipline Scalable

The five operational practices above are observable in any successful enterprise AI deployment. They are also significantly easier to implement when the underlying architecture supports them by default rather than requiring per-deployment effort to construct.

The architectural shape that supports operational discipline at scale is the fabric layer we have described across recent posts. Provider abstraction means workloads can be moved between providers without disrupting the operational ownership model. Intelligent routing means cost and capacity become observable in unified dashboards rather than scattered across vendor consoles. Fallback and degradation means continuity engineering is structural rather than per-deployment. Unified observability means performance review can happen at the portfolio level rather than per-system. Fabric-layer governance means policy enforcement is consistent across the entire AI surface rather than implemented per application.

These five fabric-layer building blocks correspond directly to the five operational practices Oxford Economics identified. The mapping is not a coincidence. Operational discipline requires architectural support. Architecture without operational discipline produces unused capability. Operational discipline without architecture produces overworked teams. The two together produce production AI that scales.
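As a concrete illustration of the provider-abstraction and fallback building blocks, here is a minimal routing sketch. The provider names, the call interface, and the error type are hypothetical; a production fabric layer would add health checks, capacity-aware routing, and unified telemetry on top of this shape.

```python
# Hedged sketch: ordered fallback behind a single provider abstraction.
# Provider names and the call(prompt) -> str interface are hypothetical.
from typing import Callable

class ProviderError(Exception):
    """Raised by a provider call that cannot be served (capacity, outage)."""

def make_router(providers: list[tuple[str, Callable[[str], str]]]):
    """Return a route() that tries providers in order and reports which served."""
    def route(prompt: str) -> tuple[str, str]:
        errors = []
        for name, call in providers:
            try:
                return name, call(prompt)
            except ProviderError as exc:
                errors.append((name, str(exc)))  # per-provider failures stay observable
        raise RuntimeError(f"all providers failed: {errors}")
    return route

def primary(prompt):   raise ProviderError("capacity exceeded")
def secondary(prompt): return f"ok:{prompt}"

route = make_router([("primary", primary), ("secondary", secondary)])
print(route("summarise invoice"))  # primary fails, request falls back to secondary
```

Because the workload talks to `route()` rather than to a vendor SDK, moving traffic between providers changes the provider list, not the deployment, which is what keeps the operational ownership model intact.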

Minnato, our AI agent infrastructure, is built specifically as the architectural foundation that makes the five operational practices implementable at production scale. Vult, our document intelligence product, and Dewply, our voice AI, run on the Minnato fabric and inherit the operational properties — observability, governance, routing, audit trails — by default rather than requiring per-deployment construction. Compliance & Invoicing extends the same architecture into ZATCA and FTA regulated workflows. Enterprise Operations, anchored in our Odoo partnership, integrates the operational discipline into business systems where AI is increasingly embedded.

The relationship between architecture and operations is the structural insight that 46% of the surveyed organisations have not yet absorbed. Operations leaders who fix the architecture first find that the operational practices become natural. Operations leaders who try to fix the practices first without the architecture find that the practices do not stick.

The Gulf Operational View

Regional data shows Gulf enterprises operating AI at production scale as a cohort that has substantially closed the operational-foundation gap. The 39% of GCC enterprises now qualifying as AI leaders reached that status precisely because they built operational discipline into AI deployments structured around regulated workflows from the beginning.

The five practices look different in the Gulf context but are recognisable. Data as continuous requirement maps onto ZATCA and FTA audit-trail obligations that already require continuous data discipline. Designing for actual workflow maps onto Arabic-language and regional process realities that cannot be addressed by global default deployments. Problem-driven procurement maps onto specific compliance and revenue workflows where the cost of inefficiency is measurable in regulated currency. Clear production ownership maps onto regional governance structures that already assign accountability at the Chief Compliance Officer and Chief Risk Officer level. Continuous management maps onto sovereign-infrastructure and regional regulatory monitoring requirements that already require active operational discipline.

Gulf operational maturity is consequently not a regional advantage so much as alignment with where the global pattern is converging. The operational disciplines that produce AI outcomes in Riyadh, Abu Dhabi, Dubai, and Doha are the same disciplines producing outcomes globally — applied earlier and more consistently because the regulatory environment required them earlier.

What Operations Leaders Should Do This Week

Three concrete actions for operations leaders whose 2026 AI portfolios include any production deployments.

The first action is to score the current AI portfolio honestly against the five practices. For every production AI deployment, mark whether data is treated as continuous, whether the design fits actual workflow, whether the problem is clearly defined, whether ownership is named, and whether continuous management is built in. Deployments scoring 5-of-5 are operating in the strong-foundation cohort. Deployments scoring 2 or fewer are in the 46% Oxford Economics identified and need targeted intervention before further investment compounds the underperformance.
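The scoring exercise above can be made mechanical. The sketch below assumes a simple per-deployment checklist against the five practices; the deployment names and their scores are illustrative, not drawn from the survey.

```python
# Illustrative portfolio triage against the five operational practices.
PRACTICES = [
    "data_continuous",       # data treated as an ongoing requirement
    "fits_actual_workflow",  # designed around observed, not documented, work
    "problem_defined",       # measurable success criteria set before build
    "owner_named",           # accountable production owner with budget
    "continuous_mgmt",       # observability, runbooks, refresh cycles built in
]

def score(deployment: dict) -> int:
    """Count how many of the five practices a deployment satisfies."""
    return sum(1 for p in PRACTICES if deployment.get(p, False))

def triage(portfolio: dict) -> dict:
    """Bucket deployments: 5/5 strong, 2 or fewer needs intervention, else watch."""
    buckets = {"strong": [], "watch": [], "intervene": []}
    for name, practices in portfolio.items():
        s = score(practices)
        key = "strong" if s == 5 else "intervene" if s <= 2 else "watch"
        buckets[key].append(name)
    return buckets

portfolio = {  # hypothetical deployments for illustration
    "invoice-extraction": dict.fromkeys(PRACTICES, True),
    "support-chatbot": {"problem_defined": True, "owner_named": True},
}
print(triage(portfolio))
```

Even a spreadsheet version of this works; what matters is that every production deployment gets a score, and that 2-or-fewer scores block further investment until the gaps are closed.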

The second action is to identify the architectural gaps producing the practice gaps. Where operational discipline is failing, the underlying cause is almost always architectural rather than human. Teams cannot maintain governance per-application; they need fabric-layer governance. Teams cannot maintain observability across vendors; they need unified observability. Teams cannot enforce ownership without authority over the systems that affect their performance; the architecture has to give them that authority. Treating practice gaps as training problems instead of architectural problems produces consultants and frustration. Treating them as architectural problems produces durable change.

The third action is to brief leadership specifically on the architecture-discipline-outcome relationship. Boards and senior executives in 2026 are increasingly receptive to operational framing of AI because the underperformance data has reached the level where it is impossible to ignore. The Oxford Economics survey is one data point in a converging picture. The opportunity for operations leaders is to use this moment to make the case for the operational foundation investment that the strategic conversation has been ready to support but has not yet been directed toward.

The Operational Read

The Oxford Economics survey crystallises a finding that has been accumulating across multiple 2026 enterprise AI reports. Underperformance is not a technology problem. It is an operations problem. Organisations that build the operational foundation produce measurable AI outcomes. Organisations that do not, do not — regardless of how much they spend on the technology layer.

For operations leaders, the data is permission to make the operational investment case at board level with external evidence supporting it. The 800 organisations surveyed represent a cross-industry, cross-functional cohort whose findings are unusually difficult to dismiss. The five practices are clear. The architectural support that makes the practices scalable is increasingly productised. The next three quarters are when the organisations that have not yet built the foundation get the chance to build it before the underperformance gap compounds further.

The 46% number is a snapshot. The operational disciplines are the lever. The fabric layer is the architecture that lets the lever work at scale. None of these is new. The opportunity in 2026 is to act on them deliberately before the gap between operational-foundation organisations and the rest becomes structurally durable.

“The 800 organisations surveyed by Oxford Economics are not failing on technology. They are failing on operations. Treating data as continuous, designing for actual work, defining problems before solutions, assigning clear ownership, building continuous management in from the start — none of this is new. It is also exactly the operational discipline applied to every other piece of production enterprise infrastructure. The opening for 2026 is to extend that same discipline to AI, supported by architecture that makes it scalable.”