AI infrastructure that endures

Three essentials to engineer for speed, scale, and trust

As AI dominates boardroom agendas, business results remain limited. Nearly 40% of AI projects initiated in the last two years did not advance beyond the pilot phase, according to our previous research. While not every pilot warrants scaling to production, a more fundamental challenge persists: a fragmented technology foundation that can't support the demands of production AI use cases.

A proper AI foundation should function like an engine in which every component—compute, data, governance—works in sync. However, when organizations attempt to scale AI use cases to live deployments, the weakest links become apparent: fragmented data that can't fuel AI models, inconsistent governance that can't manage risk, and infrastructure platforms that can't handle real-world workloads. As AI advances toward agentic systems that make independent decisions in distributed environments, specialized infrastructure and coordination become imperative.

Our analysis of data from a survey of 1,200+ global C-suite executives shows that organizations with higher operational maturity and infrastructure readiness share specific foundational capabilities that enable them to advance value-generating AI use cases: an adaptable infrastructure, robust governance, and strategic capability development through people and partnerships. Such capabilities become the blueprint for building an intentionally designed and seamlessly integrated infrastructure that delivers results.
 

What AI infrastructure approach provides adaptability to optimize cost and performance?

A hybrid approach that combines on-premises systems with private and public cloud resources gives organizations the flexibility to match specialized compute, advanced storage, and high-performance networking to each AI workload's unique needs, from model training to inferencing. 70% of executives say a workload-optimized hybrid strategy has enabled their organization to optimize costs and performance. Yet only 8% of executives say their current infrastructure meets all their AI needs. Looking ahead, just 42% expect their current infrastructure to handle the data volumes and compute demands of advanced AI models, and 46% expect it to handle real-time inferencing at scale.

“… we’ve architected our systems to easily adapt and change in case another player is better or certain AI models become too expensive.”

Hauke Stars
Chief Data Officer at Volkswagen AG

 

How can organizations scale AI responsibly? 

Our research suggests that leading organizations embed governance into their AI infrastructure from the beginning—what we call "trust-by-design." When governance is engineered into the infrastructure foundation, it shifts from an add-on to a catalyst, enabling faster deployment and greater resilience while helping organizations maintain control over data integrity, model accuracy, and ethical standards.

83% of executives agree that effective AI governance is essential, ranking ethical considerations, privacy and data security, and transparency and explainability as the most important elements. Yet just 8% say they have embedded frameworks to manage AI-related risks. Moreover, data privacy, security, and compliance concerns are cited among the top reasons that AI infrastructure investments have failed to deliver as expected.

 

How do we choose the right AI infrastructure partners?

Our analysis shows external partnerships are essential to AI infrastructure readiness, but most organizations select partners based on visionary priorities rather than tactical capabilities, which may explain disappointing results. Executives report that vendor lapses are the top reason AI infrastructure efforts fall short. Partners must be chosen strategically, ensuring they bring both an understanding of the organization's business context and the technical capabilities to deliver.
 

Organizations choose partners based on strategy, but that often leads to disappointment in execution.

Graphic showing organizations select partners based on three visionary priorities more so than three pragmatic priorities.

 
Developing in-house talent is also critical. While 87% of organizations are investing in training and recruiting AI talent, nearly two-thirds admit they are still in the early phases of AI workforce maturity. Navigating emerging technology is new territory, and building a community of experts with specialized skills can help minimize risk. For example, our analysis confirms that Centers of Excellence (CoEs) accelerate maturity and AI infrastructure readiness, yet just over one-third (38%) of organizations have established an AI CoE.

Download the report to explore more insights across these three critical areas. An action guide outlines key steps you can take to make your infrastructure a springboard for your AI ambitions.

 

 


Meet the authors

Ric Lewis, Senior Vice President, IBM Infrastructure

Hari Kannan, Vice President, Strategy and Business Development, IBM Infrastructure

Hillery Hunter, General Manager, IBM Power; CTO, IBM Infrastructure; IBM Fellow

Robert Zabel, Associate Partner, Research Lead, IBM Institute for Business Value

Jana Chan, Managing Research Consultant, IBM Institute for Business Value

Originally published 06 October 2025