The US Treasury Just Said the Quiet Part Out Loud: Not Adopting AI Is Now a Financial Risk.

The US Treasury Department launched its AI Innovation Series last week — four roundtables convening banks, regulators, and technology firms to accelerate AI adoption in financial services. Treasury Secretary Bessent framed it explicitly: "failure to adopt productivity-enhancing technology is its own risk." The US government is no longer asking whether financial institutions should use AI. It is treating the failure to adopt AI as a threat to financial stability. That reframes the enterprise AI conversation globally.

For three years, the enterprise AI conversation has been framed around opportunity. AI can improve your operations. AI can reduce your costs. AI can give you a competitive advantage.

Last week, the US Treasury Department reframed the conversation entirely. It is no longer about what AI can do for you. It is about what happens to you if you do not adopt it.

On March 23, the Financial Stability Oversight Council and the Treasury Department's Artificial Intelligence Transformation Office launched the AI Innovation Series — a public-private initiative that brings together financial institutions, technology firms, regulators, and specialised experts across four roundtables. The stated goal: explore the highest-value AI use cases and identify practical approaches to scaling AI adoption while preserving safety and soundness.

The language from Treasury Secretary Scott Bessent was unmistakable: “We are optimizing regulation to support growth for both Main Street and Wall Street: moving from a posture focused on constraint toward one that recognizes failure to adopt productivity-enhancing technology as its own risk.”

Read that again. The US Treasury Secretary — the person responsible for the stability of the world's largest financial system — just said that not adopting AI is itself a risk. Not a missed opportunity. A risk. A risk to financial stability.

Deputy Assistant Secretary Christina Skinner reinforced the point: “AI adoption is not merely a question of technological modernization — it is critical to America's financial stability and a precondition to economic growth. When institutions cannot deploy tools that improve fraud detection, credit allocation, and operational resilience, the system becomes less efficient and less secure.”

This is the most consequential policy shift in enterprise AI since the EU AI Act. And its implications extend far beyond the United States.

From Constraint to Enablement: The Regulatory Pivot

To understand why this matters, you need to understand what changed.

For the past three years, the global regulatory posture toward AI has been predominantly cautious. The EU AI Act, which came into full enforcement in January 2026, established strict transparency, safety, and accountability standards for AI systems. Regulatory discussions worldwide centred on risk, bias, accountability, and harm prevention. The implicit message to financial institutions was: proceed carefully, or better yet, wait until the rules are clear.

The Treasury's AI Innovation Series represents a 180-degree pivot. The US government is now saying that the risk of waiting is greater than the risk of adopting. Institutions that fail to deploy AI for fraud detection, credit underwriting, cybersecurity, and operational resilience are not being cautious — they are becoming less secure and less efficient.

This is not a theoretical position. The Treasury released two practical resources alongside the initiative: an AI Lexicon, which standardises terminology across the financial sector, and a Financial Services AI Risk Management Framework, which provides specific governance guidance for deploying AI in regulated environments.

Treasury's Chief AI Officer Paras Malik framed the operational reality: “AI is moving from experimentation to enterprise-wide integration, and disciplined implementation will determine its impact. Through the Innovation Series, we are convening regulators and industry leaders to ensure governance frameworks evolve alongside deployment and remain fit for purpose as AI becomes embedded across financial markets.”

The key word is “embedded.” Not explored. Not piloted. Embedded. The US government's position is that AI is already embedded in financial services functions, and that governance frameworks need to catch up to that reality — not slow it down.

Why Financial Services Leads Enterprise AI Adoption

Financial services is not an arbitrary starting point for this policy shift. It is the sector where AI adoption is most advanced, most measurable, and most consequential.

Nvidia's 2026 State of AI survey — covering 3,200 respondents across five industries — found that financial services showed the strongest adoption and ROI results alongside retail and healthcare. Among financial services professionals specifically, 89% reported that AI is helping increase annual revenue while lowering operating costs. The top use cases are fraud detection, risk management, document processing, and customer service automation — precisely the functions the Treasury identified as critical to financial stability.

Eighty-four percent of financial services respondents said open-source models and tools are important to their AI strategy, as firms increasingly fine-tune models on proprietary data to gain capabilities that competitors cannot replicate.

The Financial Stability Oversight Council's quarterly review on March 25 explicitly addressed “the implications of increased investment in artificial intelligence” as part of its assessment of risks to the financial system. AI investment is now a standing agenda item for the body responsible for monitoring systemic risk to the US financial system.

This elevation — from technology procurement decision to systemic financial stability consideration — changes the conversation at every level of the enterprise. AI adoption is no longer a decision made by the technology team and approved by the CFO. It is a governance question with regulatory implications. Boards of directors, risk committees, and compliance officers are now stakeholders in AI adoption decisions, because the failure to adopt has been explicitly framed as a risk at the most senior level of US financial policy.

The Risk Management Framework: What It Actually Says

The Financial Services AI Risk Management Framework released by Treasury provides the governance architecture that institutions need to deploy AI responsibly. It is aligned with NIST standards but tailored specifically for financial services.

The framework addresses the areas that have historically slowed AI adoption in regulated industries: explainability (how AI-driven decisions are explained to regulators and customers), data practices (how the data that AI systems process is managed), identity and fraud (how AI is used for fraud detection without introducing new vulnerabilities), and accountability (who is responsible when an AI system makes an error).

By providing standardised guidance on these questions, the Treasury is removing the regulatory uncertainty that has been the single biggest barrier to enterprise AI adoption in financial services. Institutions that previously hesitated because they were unsure how regulators would evaluate their AI deployments now have a published framework to follow. The guidance does not eliminate risk — but it makes the risk manageable and the governance approach defensible.

The AI Lexicon is equally significant in practice. One of the persistent challenges in enterprise AI adoption is that different stakeholders use the same terms to mean different things. “Model,” “bias,” “explainability,” “governance,” “risk” — each of these words carries different connotations for engineers, risk officers, compliance teams, and regulators. The standardised lexicon creates a common language that enables productive conversation between technical and regulatory stakeholders.

For enterprises outside the United States, these resources matter because they establish a reference framework. Financial institutions operating globally — and financial services is inherently global — will adopt governance practices consistent with the US framework even when operating in other jurisdictions, simply because the US financial system's standards influence practices worldwide.

What This Means for Gulf Financial Institutions

The Treasury's position carries direct implications for financial institutions operating in the Gulf and broader MENA region.

Gulf financial institutions operate at the intersection of multiple regulatory frameworks. UAE Central Bank regulations, DIFC and ADGM financial services authority requirements, Saudi Central Bank (SAMA) directives, and Central Bank of Bahrain guidelines all influence AI adoption decisions. When the US Treasury — the most influential financial regulator in the world — explicitly frames AI adoption as a financial stability requirement, that signal influences regulatory thinking globally.

Several specific implications emerge for Gulf enterprises.

First, the governance framework. The Treasury's Financial Services AI Risk Management Framework provides a template that Gulf financial institutions can adapt to their regulatory context. Rather than building governance frameworks from scratch, institutions can reference the US framework and tailor it for local requirements — accelerating the governance work that typically precedes large-scale AI deployment.

Second, the competitive dynamic. Gulf financial centres — DIFC, ADGM, and Riyadh's King Abdullah Financial District (KAFD) — compete globally for financial services firms. When the US government frames AI adoption as critical to financial competitiveness, Gulf regulators face pressure to ensure their own frameworks enable rather than constrain AI adoption. Regulatory environments that slow AI adoption risk losing competitiveness against jurisdictions that facilitate it.

Third, the board-level conversation. When a government frames AI adoption as a stability requirement, the conversation moves from technology committees to the boardroom. Board members and risk committees at Gulf financial institutions will increasingly face questions about their institutions' AI adoption posture — not as a technology question, but as a governance and risk question. The Treasury's position gives internal AI champions the language and the regulatory backing to elevate AI adoption to a strategic priority.

Fourth, the operational reality. Gulf financial institutions process enormous volumes of cross-border transactions in multiple currencies and languages. Fraud detection, compliance monitoring, anti-money laundering, and customer due diligence at this scale and complexity are precisely the functions where AI delivers the most measurable improvements — and where the Treasury says failure to adopt creates systemic risk.

The Global Pattern: Regulation Enabling Rather Than Constraining

The Treasury's AI Innovation Series fits a broader pattern emerging in 2026.

The EU AI Act established comprehensive safety and accountability standards — but also created a clear framework within which enterprises can confidently deploy AI, knowing they are compliant. The UK's approach has emphasised sector-specific regulation rather than horizontal AI law, allowing financial services regulators to develop AI guidance tailored to their domain. India's Digital Personal Data Protection Act covers AI systems that process personal data at scale, even as the government invests $1.2 billion in national AI infrastructure.

The pattern across all of these approaches is a shift from “regulate to constrain” to “regulate to enable.” Governments are recognising that clear, practical governance frameworks accelerate AI adoption rather than slowing it — because they remove the regulatory uncertainty that causes enterprises to delay.

The Treasury's position is the most explicit version of this shift. By framing non-adoption as a risk, it goes beyond enabling AI adoption. It creates regulatory pressure to adopt. Financial institutions that choose not to deploy AI for fraud detection and operational resilience will need to explain that choice to regulators who have explicitly stated that such tools improve financial stability.

For enterprise leaders across every industry — not just financial services — the Treasury's position signals where regulation is heading. If the most conservative regulatory environment in the world (US financial services) now frames AI adoption as a stability requirement, other sectors and jurisdictions will follow. The question for every enterprise is not whether this regulatory pressure will reach their industry, but when.

What This Means for Enterprise AI Decision-Making

The Treasury's reframing changes the enterprise AI decision-making calculus in three specific ways.

First, the burden of proof shifts. Previously, teams proposing AI adoption needed to prove that AI was safe enough to deploy. Now, the burden includes proving that non-adoption is safe enough to justify. When a regulator says that failure to adopt productivity-enhancing technology is itself a risk, the business case for AI includes the risk of not deploying it.

Second, governance becomes an enabler rather than a blocker. The Treasury's AI Risk Management Framework and AI Lexicon provide the governance tools that compliance and risk teams need to approve AI deployments. Rather than saying “we don't have a framework for this,” institutions can now say “here is the framework — let's deploy within it.” Governance accelerates adoption rather than blocking it.

Third, the timeline compresses. When a regulator convenes roundtables to identify “practical approaches to scaling innovation,” the implication is that scaling should be happening now — not after another year of pilot programmes and proof-of-concept projects. The Innovation Series is designed to produce actionable guidance for deployment at scale, not to explore whether AI is ready. The government's position is that AI is ready and the governance frameworks need to support deployment.

For enterprises still in the assessment phase — and Nvidia's survey showed 28% of enterprises are still assessing — the Treasury's position should accelerate the transition to deployment. The regulatory environment is no longer a reason to wait. It is becoming a reason to move.

“The US Treasury Secretary just said that ‘failure to adopt productivity-enhancing technology is its own risk.’ That single sentence changes the enterprise AI conversation globally. AI adoption is no longer a technology decision — it is a governance, risk, and regulatory question. The burden of proof has shifted. The question is no longer ‘is AI safe enough to deploy?’ It is ‘is non-adoption safe enough to justify?’ For enterprises in the Gulf and globally, the signal is clear: the most influential financial regulator in the world just treated AI adoption as a financial stability requirement. Every other jurisdiction will follow.”

What to Watch

The four roundtable outcomes. The AI Innovation Series will produce specific guidance on high-value AI use cases and governance approaches for financial services. The recommendations from these roundtables will shape regulatory expectations across the sector.

Other regulators' responses. Watch how Gulf financial regulators — UAE Central Bank, SAMA, DIFC, ADGM — respond to the US Treasury's position. Regulatory alignment with the US approach would accelerate AI adoption frameworks across the Gulf.

Board-level AI governance. Financial institutions whose boards have not yet established AI governance frameworks will face increasing pressure to do so. The Treasury's position makes AI adoption a board-level risk decision, not a technology procurement decision.

The EU-US regulatory dynamic. The EU AI Act and the US Treasury's enablement approach represent different but potentially complementary regulatory philosophies. How these frameworks interact will shape the governance landscape for enterprises operating across both jurisdictions.

The US government just told the financial services industry that the risk of waiting is greater than the risk of adopting AI. That changes everything — not just for financial institutions, but for every enterprise evaluating whether to accelerate or delay AI deployment. The regulatory wind is now at AI adoption's back. The enterprises that move with it will define their industries for the next decade.