Unlocking business potential with open source AI and hybrid multicloud

Authors

Stephanie Susnjara, Staff Writer, IBM Think

Ian Smalley, Staff Editor, IBM Think

Open source artificial intelligence (AI) is the buzzword of the moment, driven by the recent release of low-cost large language models (LLMs) like DeepSeek-R1 and the growing prominence of open models, such as Llama and Mistral.

These breakthroughs are catching the public’s attention, but there’s a serious question that businesses are facing: How can we turn this buzz into tangible business value?

Open source technologies like Linux and Kubernetes have long been the backbone of critical systems, providing transparency, stability and security. Their collaborative nature fosters rapid innovation, which has been vital to the evolution of advanced AI models, including generative AI. However, the real business potential lies in combining open source AI and hybrid multicloud environments. This powerful pairing provides businesses with the following:

  • Seamless integration
  • Scalable flexibility
  • Enhanced security
  • Customized solutions
  • Innovation through collaboration

An IBM study reveals that 62% of organizations plan to increase their AI investments in 2025, with nearly half focusing on leveraging open source tools for their AI initiatives.

“We’re designing AI for multiclouds,” says Shobhit Varshney, VP & Senior Partner, AI, Data & Automation Leader at IBM. “This means we need a single layer of automation, FinOps, security and governance that cuts across every cloud.”

Seamless integration with hybrid by design

Avoiding vendor lock-in is critical for businesses, and open source AI fits naturally within a hybrid multicloud model. It allows companies to combine proprietary AI platforms, such as AWS SageMaker or Google Vertex AI, with open models like IBM’s Granite. The Granite model series, open-sourced under the Apache 2.0 license, was designed for integration into hybrid multicloud environments, giving organizations more freedom and control over deploying and managing AI models across different platforms.

“Anything that allows you to manage the model in your infrastructure plays well with hybrid by design,” explains Varshney, “especially when systems are built to work seamlessly across multiple cloud computing environments and on-premises IT infrastructures from the start.”

This hybrid-by-design infrastructure ensures that AI models can move freely across clouds, allowing businesses to choose the best environment for their needs.

Flexibility at scale

Open source AI models offer unmatched flexibility, particularly when scaling and deploying models on an organization’s infrastructure. While third-party platforms like Anthropic’s Claude or OpenAI’s ChatGPT are easy to deploy, they often lock businesses into specific ecosystems with limited customization. In contrast, open source models give organizations full control, allowing them to create tailored solutions that meet unique business needs.

This flexibility is essential for businesses operating across multiple cloud environments. For example, a company using AWS for storage may want to deploy a model from Google or OpenAI. However, proprietary models often come with limitations in cross-cloud compatibility. Open source models integrate seamlessly across various platforms, enabling businesses to choose the best cloud provider without compromising consistency or compliance.

Enhanced security and data control

Self-hosting open source AI models ensures that sensitive data stays within an organization’s secure IT infrastructure, which is essential for businesses with stringent privacy requirements. By avoiding third-party servers, companies maintain control over their data and reduce risks related to external data handling, especially when dealing with international security concerns.

Another critical security advantage of open source models is the ability to control the integrity of the models themselves. Research by Anthropic has highlighted the potential risks of code manipulation or embedded vulnerabilities within AI models. Malicious actors could introduce hidden threats even with trusted providers like Meta or Google.

Open source, self-hosted models mitigate these risks by allowing organizations to inspect, validate and modify the code, ensuring greater transparency and security.
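In practice, validating a self-hosted model can start with something as simple as verifying the published checksum of downloaded weight files before loading them. A minimal sketch (the file path and digest here are hypothetical placeholders, not tied to any specific model release):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so even multi-gigabyte weight files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path: str, expected_digest: str) -> bool:
    """Return True only if the local file matches the digest published
    alongside the model release; refuse to load it otherwise."""
    return sha256_of_file(path) == expected_digest
```

A checksum only proves the download matches what the publisher shipped; deeper validation (inspecting training data provenance or scanning serialized weights for executable payloads) builds on this same self-hosted foundation.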

“With open source, you control your destiny,” says Varshney. “You know what’s behind the model, how it’s trained and you’re hosting it in an environment you trust.”

Customization and fine-tuning for specific needs

Beyond flexibility, the ability to fine-tune open source AI models is a game-changer. Fine-tuning allows businesses to tailor models to meet industry-specific requirements, making them even more valuable. For example, fine-tuning open source models like Llama or Granite in industries like healthcare or telecommunications helps businesses add domain-specific knowledge, improving model accuracy and performance.

Unlike proprietary models, where fine-tuning often requires sending proprietary data to the vendor’s servers, open source models allow businesses to maintain full control over their customization processes.

Varshney explains, “If one takes a small model, like a Granite model, they can add an adapter that understands my enterprise terminology. For example, healthcare terminology differs from telecom, so I can fine-tune the model to understand better and serve that unique domain.”
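The adapter approach Varshney describes is commonly implemented with low-rank updates (LoRA-style fine-tuning), where a frozen weight matrix is augmented by a small trainable product of two narrow matrices. A back-of-the-envelope sketch of why this is cheap; the dimensions are illustrative, not taken from any specific Granite model:

```python
def full_finetune_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when updating a full d_out x d_in weight matrix."""
    return d_in * d_out

def adapter_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for a rank-r adapter: the update is B @ A,
    where A is (rank x d_in) and B is (d_out x rank)."""
    return rank * d_in + d_out * rank

# For one 4096 x 4096 projection, a rank-8 adapter trains well under
# 1% of the parameters the full matrix would require.
full = full_finetune_params(4096, 4096)
lora = adapter_params(4096, 4096, rank=8)
ratio = lora / full
```

Because only the adapter weights change, the base model stays frozen on the organization’s own infrastructure and the domain-specific adapter can be swapped per use case.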

Cost-to-performance advantage

Fine-tuning open source models on in-house infrastructure provides significant performance benefits, especially when smaller models are fine-tuned with proprietary data.

“If you’re fine-tuning a smaller model with your proprietary data, it will outperform a larger, untuned model,” says Varshney. “This creates a cost-to-performance advantage, as smaller, fine-tuned models are more efficient and effective for your specific use cases.”

For instance, early proof-of-concept results of IBM Granite models show that combining a small Granite model with enterprise data achieves task-specific performance at a fraction of the cost—3 to 23 times cheaper than large frontier models—while outperforming or matching the same-sized competitors on key benchmarks.

This capability is particularly advantageous in edge computing scenarios, where smaller, fine-tuned models—like Granite—enable real-time processing on devices with limited computational power, eliminating the need for cloud infrastructure.

“One can put a small model on a remote IoT device and unlock use cases we couldn’t previously,” Varshney adds.

This approach delivers both cost savings and enhanced capabilities, especially for remote or resource-constrained environments.
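One common way to fit a model onto a constrained edge device is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, roughly quartering the memory footprint. A minimal symmetric-quantization sketch, for illustration only; production toolchains do this per-layer with calibration data:

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights onto int8 values in [-127, 127] using a single
    symmetric scale. Assumes at least one nonzero weight."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized: list[int], scale: float) -> list[float]:
    """Recover approximate float weights; each value is within half a
    quantization step of the original."""
    return [v * scale for v in quantized]

weights = [0.91, -0.42, 0.07, -1.27]
q, scale = quantize_int8(weights)   # 1 byte per weight instead of 4
approx = dequantize(q, scale)
```

The small accuracy loss from quantization is often recovered by the fine-tuning described above, which is why small, tuned, quantized models are a natural fit for remote IoT deployments.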

Innovation through collaboration

Open source AI’s collaborative nature accelerates its pace of innovation. With contributions from a global community of developers, these models evolve quickly, staying on the cutting edge of AI development. This rapid innovation is crucial for businesses striving to maintain a competitive advantage in the AI-driven landscape.

IBM’s InstructLab project, launched in partnership with Red Hat, aims to democratize access to AI fine-tuning, making large language model customization more affordable and accessible.

Varshney notes, “The community plays a key role in strengthening these models, making them more robust.”

The path forward is open

The convergence of open source AI and hybrid multicloud is the ultimate strategy for businesses looking to maximize their AI investments. By integrating open source models within a flexible multicloud framework, companies ensure their AI solutions are scalable, adaptable and optimized across any platform, unlocking significant business value.

An IBM study shows that 51% of companies using open source AI report positive returns, underscoring this approach’s tangible, measurable impact. Embracing this combination of open source and hybrid multicloud is key to driving growth in an AI-powered future.

“Open is the future of AI,” says Varshney.

By adopting open source AI within a hybrid multicloud infrastructure, businesses can not only stay ahead of emerging trends but also deliver sustained value in an AI-driven world.
