December 18, 2023 By Ben Ball 4 min read

In the most recent version of its “Cloud Hype Cycle,” Gartner placed multi-cloud network operations in the “peak of inflated expectations,” perilously close to the “trough of disillusionment.” While this may reflect the state of both hybrid and multicloud networking at an aggregate level, there is an abundance of nuance lurking underneath Gartner’s assessment.

The challenge is that hybrid and multicloud are both the present and the future of networking. It is an area that seems to be producing inflated expectations, deep disappointment, enlightened revelations and amazing productivity all at the same time. Let us dig a little deeper into what this means. 

The present: Applications in multiple silos

Most applications, services and content streams are already delivered from multiple clouds and hybrid environments. We know this because IBM® NS1® is the authoritative DNS provider for many of those assets. Our internal datasets show that 80% of our customers house workloads in more than one cloud provider.  

Nevertheless, having multiple clouds available is not the same as spreading delivery of applications, services and content across multiple clouds.  

The application traffic flowing through IBM NS1 Connect’s infrastructure shows that clouds are almost always used in a siloed way. Multiple clouds often deliver separate, single-cloud applications in parallel, but synchronized delivery of the same application from multiple clouds is rare.

As an example, we found one large enterprise customer that uses nine separate public clouds—from the “big three” of AWS, Azure and GCP all the way down to smaller providers like Tencent and DigitalOcean. We found zero overlap between cloud providers in the DNS records associated with those clouds. Every record pointed at a single cloud.
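The overlap check described above can be sketched in a few lines of Python. This is an illustrative sketch only: the zone data, the suffix-to-provider mapping and the record format are invented for the example and do not reflect NS1’s actual schema. Each DNS record’s answers are mapped to a cloud provider by hostname suffix, and we then look for records whose answers span more than one provider:

```python
# Hypothetical sketch: detect whether any DNS record spans multiple clouds.
# The suffix-to-provider mapping is illustrative, not exhaustive.
CLOUD_SUFFIXES = {
    ".amazonaws.com": "AWS",
    ".azurewebsites.net": "Azure",
    ".googleusercontent.com": "GCP",
    ".digitaloceanspaces.com": "DigitalOcean",
}

def providers_for(answers):
    """Map a record's answer hostnames to the set of cloud providers they hit."""
    found = set()
    for host in answers:
        for suffix, provider in CLOUD_SUFFIXES.items():
            if host.endswith(suffix):
                found.add(provider)
    return found

def multi_cloud_records(zone):
    """Return record names whose answers span more than one provider."""
    return [name for name, answers in zone.items()
            if len(providers_for(answers)) > 1]

# Example zone: every record points at exactly one cloud (the siloed pattern).
zone = {
    "app.example.com":    ["lb-1.us-east-1.elb.amazonaws.com"],
    "static.example.com": ["cdn.azurewebsites.net"],
    "media.example.com":  ["bucket.digitaloceanspaces.com"],
}
print(multi_cloud_records(zone))  # -> [] : zero overlap, as in the customer above
```

A zone where even one record answered with hostnames from two providers would show up in this list—the pattern that, per the data above, almost never occurs today.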

The future: Distributed, multi-infrastructure applications 

Application workloads are already leveraging the advantages of single cloud or hybrid environments—the next step will be to leverage the advantages of multiple environments (including cloud, edge and on-prem infrastructure) at the same time. Ideally, this will happen through an abstracted management layer that uses policies to implement connectivity decisions across environments. The benefits of a distributed approach to application connectivity are clear: 


- The unique value of cloud-specific microservices can be applied to specific parts of an application workload without needing to house the entire application in one place—a “best of breed” approach that enhances capabilities without resulting in vendor lock-in.

- The latency of cloud services can vary from moment to moment and depends on a host of factors, including geography, routing pathways and service availability. A multi-cloud approach allows businesses to select the best-performing environment for their applications at any given time.

- Application delivery costs vary significantly between cloud providers. In particular, baseline usage commitments can play a strong role in determining the overall expense of bringing an application to the end-user’s screen. A multi-cloud approach delivers the least expensive option for any particular moment, lowering the overall cost of delivery.

- When downtime or service deprecations come (and they will come), network admins need the flexibility to steer application traffic to alternative workloads. A true multi-cloud delivery framework ensures that applications keep running even when single clouds are unavailable.
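A policy layer that weighs performance, cost and availability at once can be reduced to a small decision function. The sketch below is hypothetical—the endpoint names, latency figures, cost figures and latency budget are all invented for illustration—but it shows the shape of the decision: filter out unhealthy environments, prefer the cheapest one that meets the latency budget, and fall back to the fastest healthy one otherwise:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str          # a cloud region or edge location (illustrative)
    latency_ms: float  # current measured latency to the user population
    cost_per_gb: float # delivery cost for this environment
    healthy: bool      # result of an availability check

def choose_endpoint(endpoints, max_latency_ms=150.0):
    """Pick the cheapest healthy endpoint within the latency budget;
    if none qualifies, fall back to the fastest healthy endpoint."""
    healthy = [e for e in endpoints if e.healthy]
    if not healthy:
        raise RuntimeError("no healthy endpoints: escalate to failover alerting")
    in_budget = [e for e in healthy if e.latency_ms <= max_latency_ms]
    if in_budget:
        return min(in_budget, key=lambda e: e.cost_per_gb)
    return min(healthy, key=lambda e: e.latency_ms)

endpoints = [
    Endpoint("cloud-a-us-east", latency_ms=40, cost_per_gb=0.09, healthy=True),
    Endpoint("cloud-b-us-east", latency_ms=55, cost_per_gb=0.05, healthy=True),
    Endpoint("edge-nyc",        latency_ms=12, cost_per_gb=0.12, healthy=False),
]
print(choose_endpoint(endpoints).name)  # -> cloud-b-us-east (cheapest within budget)
```

Note how the unhealthy edge location is excluded even though it is the fastest—exactly the resilience behavior described in the last bullet above.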

The value of traffic steering in application delivery 

NS1 has long recognized the value of delivering applications, services and content from multiple back-end infrastructure vendors. We have been helping customers handle multiple infrastructure layers for years. These customers have used our innovative traffic steering capabilities to improve the business metrics they care about.  

Leveraging highly granular Real User Monitoring (RUM) data, we have helped these customers improve performance by steering traffic to the CDNs and clouds that provide the lowest latency at the best cost. Using data from NS1’s availability monitors, we have steered traffic around outages and service deprecations to keep applications (and revenue) up and running.
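The core of RUM-driven steering is simple to express: aggregate latency samples from real users by region and delivery network, then route each region to the network with the best observed latency. The sketch below is an illustration only—the sample data, region labels and network names are invented, and production steering weighs many more signals (cost, availability, routing policy) than this:

```python
import statistics
from collections import defaultdict

# Hypothetical RUM samples: (user_region, delivery_network, latency_ms).
# All values are invented for illustration.
samples = [
    ("EU", "cdn-a", 80), ("EU", "cdn-a", 95), ("EU", "cdn-b", 60),
    ("EU", "cdn-b", 70), ("US", "cdn-a", 30), ("US", "cdn-b", 55),
    ("US", "cdn-a", 35),
]

def steering_map(samples):
    """For each user region, pick the delivery network with the lowest
    median observed latency -- the essence of RUM-driven traffic steering."""
    by_key = defaultdict(list)
    for region, network, latency in samples:
        by_key[(region, network)].append(latency)
    best = {}
    for (region, network), latencies in by_key.items():
        med = statistics.median(latencies)
        if region not in best or med < best[region][1]:
            best[region] = (network, med)
    return {region: network for region, (network, _) in best.items()}

print(steering_map(samples))  # e.g. {"EU": "cdn-b", "US": "cdn-a"}
```

The resulting map is what a DNS-based steering layer consults at resolution time: a user in the EU gets an answer pointing at one network, a user in the US at another, and the map is recomputed continuously as new RUM samples arrive.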

Now that NS1 is part of IBM, we are starting to extend the proven value of DNS traffic steering deeper into the networking stack. NS1’s traffic steering finds the best connection between clouds and end users. IBM Hybrid Cloud Mesh optimizes internal connectivity between cloud-based application workloads. Together, the two solutions deliver applications that are optimized for performance, cost and availability at every connection point. 

DNS that does more: Optimizing application delivery end-to-end 

The market for multi-cloud network operations software is rapidly evolving. A horde of startups is already converging on what promises to be a large, industry-defining set of customer challenges. Yet, if you look at the scope of the challenge these startups have chosen to address, it is surprisingly narrow. 

Almost all of the emerging multi-cloud networking solutions focus on optimizing connections between clouds for internal-facing use cases. Extending that optimized connection to the end user is almost an afterthought, and networks outside these vendor ecosystems barely figure at all. IBM is taking a more holistic view. By attaching NS1’s authoritative DNS network to its hybrid cloud connectivity solution, IBM is delivering connectivity that is optimized end to end, regardless of the customer use case.

The value of faster connections, lower network delivery costs, better network resilience and higher availability is not merely confined to what is inside the firewall. It extends out to the end user experience through applications that are better performing, always on and cheaper to provide. That, in turn, means applications that have higher customer satisfaction, better customer retention and higher per-customer revenue. 

We believe there is a ton of customer value to be unlocked by optimizing connectivity at every point in the hybrid cloud application delivery chain—from internal networks all the way out to end users. Stay tuned as we start to bring this vision to life. 

Learn more about IBM NS1 Connect® Managed DNS