In January 2026, JPMorgan Chase made a quiet decision that has aged unusually well. The bank moved AI spending out of its innovation and R&D budget categories and into core infrastructure — the same budget category that holds cybersecurity, operational resilience, and core banking platforms. CFO Jeremy Barnum confirmed the categorisation in subsequent investor communications. The 2026 technology budget reached $19.8 billion, with approximately $2 billion of that — roughly 10% of the technology spend, or 1 to 1.2% of total revenue — directly tagged as AI. The bank reports approximately $2 billion in realised annual AI value, putting the AI investment at break-even before any further compounding.
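The quoted ratios are easy to sanity-check. A minimal sketch, using only the figures stated above; the implied revenue range is derived from the 1–1.2% band, since the revenue figure itself is not stated:

```python
# Sanity-check the budget ratios quoted above.
tech_budget_bn = 19.8   # 2026 technology budget, $bn (stated)
ai_spend_bn = 2.0       # AI-tagged spend, $bn (stated)

ai_share_of_tech = ai_spend_bn / tech_budget_bn
print(f"AI share of tech budget: {ai_share_of_tech:.1%}")  # ~10.1%

# Working backwards: the total-revenue range consistent with the 1-1.2% band.
revenue_high_bn = ai_spend_bn / 0.010  # 1.0% of revenue -> $200bn
revenue_low_bn = ai_spend_bn / 0.012   # 1.2% of revenue -> ~$167bn
print(f"Implied revenue range: ${revenue_low_bn:.0f}bn to ${revenue_high_bn:.0f}bn")
```

Both checks hold: $2 billion against a $19.8 billion technology budget is just over 10%, and the 1–1.2% revenue band implies total revenue in the high-$160 billion to $200 billion range.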
At the time, the reclassification was widely covered as a JPMorgan-specific event. The bank was already considered the leading enterprise AI deployer among global banks. Its scale — over 300,000 employees, the largest technology budget in the financial industry — was treated as the explanation for a categorical move that other banks could not yet replicate.
Five months later, the structural lesson is harder to dismiss as JPMorgan-specific. Every major enterprise AI signal that has landed in 2026 has reinforced the same conclusion. AI is no longer reasonably budgeted as experimental innovation. It is core operating infrastructure, and the enterprises operating cleanly are the ones that have moved their budget category, governance regime, and accountability model to match.
This blog is for finance leaders, CIOs, and board-level executives planning the 2026 budget revision cycle and the 2027 budget build. The case for treating AI as core infrastructure is now substantially settled by external evidence; the work is in the operational implications.
What JPMorgan Actually Did
The technical move was a budget category change. AI spending that had been classified as innovation, R&D, or strategic investment was reclassified as core infrastructure. That single change carries five operational consequences inside a regulated financial institution.
The first consequence is funding stability. Innovation budgets are subject to discretionary cuts during margin pressure cycles. Core infrastructure budgets are not. Cybersecurity, operational resilience, and core banking platforms are non-negotiable line items because the regulatory consequences of underfunding them are immediate. AI joining that category means the funding curve becomes steady and predictable rather than subject to quarterly tightening.
The second consequence is governance regime. Innovation projects operate under innovation-stage governance — looser controls, faster iteration, lower documentation burden. Core infrastructure operates under operational-stage governance — risk management committees, formal change control, regulatory reporting, audit-grade documentation. Reclassifying AI applies the second governance regime, which is both more demanding and substantially more durable.
The third consequence is accountability assignment. Innovation initiatives often have ambiguous accountability — sponsored by an executive, owned by a project team, attributable to several budget owners. Core infrastructure has named owners with non-delegable accountability for performance, security, and continuity. AI inside core infrastructure inherits that same named-owner model.
The fourth consequence is integration with adjacent infrastructure. Cybersecurity and AI overlap. Operational resilience and AI overlap. Data governance and AI overlap. When AI lives in the same budget and governance category as those systems, integration happens by default rather than as a separate project.
The fifth consequence is the visible commitment to scale. JPMorgan has deployed its internal LLM Suite to approximately 250,000 employees — the vast majority of the workforce. Over 100,000 use it daily. One in three employees begins their workday with an AI tool open. That deployment scope is incoherent with experimental innovation framing. It only makes sense as core infrastructure.
These five consequences are the substance of the reclassification. The dollar figure is downstream of them.
The Five Signals That Validated The Move In Five Months
Each of the major enterprise AI developments of 2026 so far has reinforced JPMorgan's January read.
The first signal was the hyperscaler Q1 2026 earnings, reported within minutes of each other on April 29. Alphabet's generative AI revenue grew approximately 800% year-over-year. Google Cloud backlog reached $462 billion. Microsoft's Azure cloud services grew 40% with $190 billion in 2026 capex. Amazon's AWS posted 28% growth. The combined hyperscaler 2026 capex of approximately $700 billion is the substrate buildout — but it is being funded against enterprise contracts that the same finance directors who classify AI as core infrastructure are signing. The capex is real because the demand is real. The demand is real because it is structural.
The second signal was the BCC Research global overview published on May 5, projecting enterprise AI adoption rising from 22% in 2025 to an expected 40% in 2026, with $650 billion in annual AI infrastructure investment from major technology companies. Adoption nearly doubling in twelve months is incompatible with experimental positioning. Innovation projects do not double annually. Core infrastructure does, when secular demand finds it.
The third signal was the EU AI Act trilogue process. Whether the August 2 deadline activates as written or slips to December 2, 2027 under the Digital Omnibus deferral, the regulatory framework treats AI as a system requiring conformity assessment, technical documentation, governance, audit trails, and human oversight at production scale. That is regulator language for core infrastructure. Regulators do not require conformity assessments and audit-grade documentation for experimental innovation.
The fourth signal was the talent gap data we discussed yesterday. The compliance and governance roles required to operationalise AI inside regulated enterprises are scarce because the work is core-infrastructure work — the kind of work that requires the same depth of operational discipline as cybersecurity, audit, and risk management. The organisations recognising this are staffing for it. The organisations still treating AI as innovation are staffing project teams that cannot deliver core-infrastructure outcomes.
The fifth signal landed this week. The OpenAI Deployment Company and Anthropic's enterprise joint venture together represent $11.5 billion of capital specifically structured to push frontier model relationships into private equity portfolio companies. Frontier labs do not build deployment vehicles for experimental innovation. They build deployment vehicles for predictable, recurring, infrastructure-scale revenue. The structural intent of those vehicles is to convert AI into the same kind of installed-base relationship that databases, ERP, and core banking systems already have.
Five months. Five categorically different signals. One direction.
What Changes When AI Moves To Core Infrastructure
For enterprises evaluating their own budget category for AI, six concrete operational changes follow from the reclassification, and they are worth thinking through explicitly before the next budget cycle.
The first change is multi-year capital planning. Core infrastructure has multi-year capital plans with explicit replacement and upgrade cycles. AI in this framing requires similar planning — what models will be in production in 2027, 2028, 2029; how the integration architecture evolves; what the migration path looks like when the foundation model layer changes. Innovation budgets do not require this discipline. Core infrastructure budgets do.
The second change is auditability as a baseline expectation. Core infrastructure produces audit trails because regulators require them. AI in this framing produces audit trails by design rather than as a remediation after enforcement begins. The EU AI Act, ZATCA, FTA, and emerging regimes are all written with the assumption that AI is auditable. Enterprises whose AI sits in innovation budgets often discover that their audit trails are reconstructable but not native. Enterprises whose AI sits in core infrastructure have audit trails that exist by default.
The third change is operational continuity engineering. Core infrastructure has redundancy, failover, capacity planning, and incident response tied to documented service level objectives. AI inside core infrastructure inherits the same expectations. Single-vendor dependencies become operational risks. Provider capacity throttling becomes a continuity planning issue. The fabric layer that routes work across providers becomes a resilience requirement, not an architectural preference.
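The continuity requirement can be made concrete. Below is a minimal sketch of provider-fallback routing at the fabric layer; the provider names, error handling, and `call` interface are illustrative assumptions, not any vendor's actual API:

```python
# Minimal sketch of multi-provider fallback routing for an AI workload.
# Provider names and the call interface are illustrative; a production
# fabric layer would add health checks, capacity-aware routing,
# timeouts, and audit logging.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    call: Callable[[str], str]  # prompt -> completion

class AllProvidersFailed(Exception):
    pass

def route_with_fallback(providers: list[Provider], prompt: str) -> tuple[str, str]:
    """Try providers in priority order; return (provider_name, result)."""
    errors = []
    for p in providers:
        try:
            return p.name, p.call(prompt)
        except Exception as exc:  # throttling, outage, timeout, ...
            errors.append((p.name, exc))
    raise AllProvidersFailed(errors)

# Usage: the primary provider is capacity-throttled, so the
# secondary provider serves the request and continuity holds.
def throttled(prompt: str) -> str:
    raise RuntimeError("capacity throttled")

providers = [
    Provider("primary-model", throttled),
    Provider("secondary-model", lambda prompt: f"ok: {prompt}"),
]
name, result = route_with_fallback(providers, "summarise the filing")
print(name, result)  # secondary-model ok: summarise the filing
```

The design point is that fallback is a property of the routing layer, not of any individual application — which is exactly what makes it a continuity requirement rather than an architectural preference.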
The fourth change is integration with the existing risk management committee structure. Core infrastructure changes go through risk committees with explicit consideration of model risk, third-party risk, concentration risk, regulatory risk, and operational risk. AI inside core infrastructure becomes a standing topic at those committees rather than an annual update.
The fifth change is the named-owner accountability model. The CIO of an enterprise with AI in core infrastructure has named accountability for AI performance, availability, and risk in the same way they have named accountability for cybersecurity. The CRO has named accountability for AI governance in the same way they have named accountability for operational risk. This is different from the matrixed, often-unclear ownership that characterises AI in innovation framing.
The sixth change is the strategic horizon of vendor relationships. Core infrastructure relationships are long. They are renegotiated periodically rather than restructured continuously. They survive vendor changes, model upgrades, and pricing shifts because they are built into the operating model. AI vendor relationships in this framing are evaluated against the same criteria — durability, replaceability, regulatory posture, financial stability — that core systems vendors are evaluated against.
What Gulf Enterprise Leaders Should Take From This
The Gulf enterprise context for the reclassification has a regional shape worth naming.
Gulf banks, telecommunications operators, energy companies, and government-owned enterprises operate under regulatory regimes that already treat technology infrastructure with the same seriousness as the JPMorgan reclassification implies for AI. ZATCA invoicing systems, FTA filing infrastructure, sovereign cloud requirements, and central bank technology supervision frameworks have always been core infrastructure. AI joining that category locally is a smaller categorical change than it is in markets where financial-sector AI governance is still maturing.
The 39% of GCC enterprises now qualifying as AI leaders, twice the global average, did not get there by treating AI as innovation. They got there by building AI into regulated workflows where the operational discipline was already in place — and by funding it accordingly. The reclassification JPMorgan made formally in January describes the budget category these Gulf enterprises were already operating from informally.
The strategic implication for Gulf enterprises that have not yet made this shift is the same as for global peers, but with regional amplifiers. The ZATCA and FTA audit obligations already require core-infrastructure-grade documentation. Sovereign infrastructure requirements already require core-infrastructure-grade availability. Arabic-language workload performance already requires core-infrastructure-grade routing. Reclassifying AI as core infrastructure inside the Gulf simply names what the regulatory environment has been demanding for some time.
The Architectural Lesson
The architectural implication of treating AI as core infrastructure is that the orchestration layer becomes part of the enterprise's permanent operational substrate. Provider abstraction, intelligent routing, fallback resilience, unified observability, and fabric-layer governance are not features that get added to AI projects. They are the substrate that AI runs on, the same way virtualisation, networking, and security are the substrate that core banking applications run on.
Minnato, our model-agnostic AI agent infrastructure, is built explicitly as that substrate. The five operational disciplines that distinguish core infrastructure — funding stability, audit-grade governance, named accountability, multi-vendor resilience, and integration with adjacent infrastructure systems — are Minnato's architectural premise. Vult, our document intelligence product, and Dewply, our voice AI, both run on the Minnato fabric. Compliance & Invoicing extends the same architectural posture into ZATCA and FTA workflows. Enterprise Operations, anchored in our Odoo partnership, integrates the same posture into business systems where AI is increasingly embedded.
The architectural choice an enterprise makes about its orchestration layer in 2026 is the same kind of choice the bank made about its core banking platform in earlier decades — a long-horizon decision that defines the operational profile of the enterprise for years afterwards.
The Read
JPMorgan's January reclassification has now been validated by every major enterprise AI signal that has landed in 2026. The categorical move was correct, and it is generalisable. Other enterprises that continue to treat AI as innovation budget will discover, before the end of the year, that the budget category does not match the operational reality. Boards approving 2026 mid-year revisions and 2027 budget builds in the next two quarters should explicitly ask whether the AI line item still belongs in innovation or, more likely, already needs to move to core infrastructure.
The reclassification is not symbolic. It changes funding stability, governance regime, accountability, integration, and architecture. The enterprises that make the change deliberately will operate cleanly through 2027. The enterprises that wait for crisis to force the change will be making it under worse conditions.
“When the world's leading enterprise AI deployer moves AI from innovation budgets to core infrastructure budgets, the move is interesting. When five months of categorically different external signals validate the same move, the move stops being interesting and becomes the structural lesson of 2026. The budget category determines the governance regime, the accountability model, the architectural posture, and the durability of every AI investment that follows. Get the category right, and everything downstream gets easier. Get it wrong, and everything downstream gets harder.”
