March 17, 2021 By Sai Vennam 5 min read

By providing consistent services across environments (on premises, in the cloud and at the edge), a distributed cloud architecture makes many points of IT friction disappear.


The world has shifted in the direction of using more cloud native (or cloud-agnostic) capabilities. There are many open source technologies like Kubernetes that run the same regardless of the cloud environment. Starting from scratch is easier today because cloud native apps are inherently easier to integrate. But most companies aren’t starting from scratch, and it is often a challenge to integrate cloud native capabilities with existing applications in on-premises data centers or different cloud environments.

Before we go deeper into that, I think it’s important to understand the basics of integration and why this growing necessity brings some significant pain to DevOps teams.

The case for application integration

In the 1990s and early 2000s, many industry-specific tools and capabilities locked customers into a particular vendor. It was hard to escape because of all the proprietary tools used to build their apps.

But guess what? People did it anyway. So, today, these companies have different teams running apps on different tools — Apache Tomcat® and JBoss®, for example — and they have to find a way to get everything to work together. That’s where integration comes in.

There are three primary ways integration happens:

  1. Application programming interfaces (APIs): Without APIs, most software today wouldn’t exist. But APIs don’t just give us access to data; they also manage the mechanics of how applications interact with one another. That’s why standardized rules, established contracts (API docs) and API management are important.
  2. Event-driven architectures: You can use message queuing services (like IBM MQ) or event-streaming capabilities (like open source Apache Kafka) to set up event-driven architectures. These architectures place a queue in the middle as an integration layer, which keeps incoming application transactions from being lost when the database can’t keep up. This helps provide a better user experience. Learn more about the difference between event-driven architecture and event streaming.
  3. Data transfer: Synchronizing data from on-premises environments to the cloud can be expensive and time-consuming, so high-speed data transfer is important. Once the data is transferred, you have to be able to access it from your cloud native apps.
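The queue-in-the-middle idea behind event-driven architectures can be sketched in a few lines. This is a minimal, single-process simulation using Python's standard-library `queue.Queue` as a stand-in for a real broker like IBM MQ or Kafka; the order data and function names are illustrative, not any particular product's API.

```python
import queue

# The queue is the middle integration layer: producers enqueue
# transactions at their own pace, and a consumer drains them into
# the "database" as backend capacity allows. Nothing is dropped
# even when requests arrive faster than the backend can write.

transactions = queue.Queue()   # stand-in for a message broker
database = []                  # stand-in for a constrained backend

def handle_order(order):
    """Producer: accept the order immediately; never block the user."""
    transactions.put(order)

def drain():
    """Consumer: persist queued orders when the backend is ready."""
    while not transactions.empty():
        database.append(transactions.get())

# A burst of incoming requests is absorbed by the queue...
for i in range(5):
    handle_order({"order_id": i})

# ...and processed later without losing any transaction.
drain()
print(len(database))  # 5
```

In a real deployment the producer and consumer run in separate services, which is exactly what decouples the user-facing request path from the database's write constraints.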

Pain points of integration

These days, development and operations are very much intertwined; that’s why “DevOps” is such a common term now. Within a team, developers do more than dev-oriented tasks; they also handle ops-oriented tasks related to their specific application domain. And when you have folks working across multiple tools in multiple environments, operational expenses go up. Let’s look at an example.

Say you work at a warehouse distribution company with 2,000 distribution centers spread out nationwide and a couple of cloud data center hubs — one on the U.S. West Coast and another on the U.S. East Coast.

In each distribution center, you might have a Kubernetes cluster running locally to keep track of what inventory is in the warehouse and what’s available. So with those 2,000 distribution centers, you have at least 2,000 Kubernetes clusters. Don’t forget, you also have the two hubs, with main Kubernetes clusters communicating with all those edge environments.

Now, say you have a new version of an app that needs to be rolled out across all 2,000 distribution centers. This scenario is painful for your operations team. This is where distributed cloud comes into play.
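To make that pain concrete, here is a sketch of what a naive, per-cluster rollout might look like without centralized management: one `kubectl` invocation per distribution center. The context names, deployment name and image tag are hypothetical; the point is that the script below only *builds* the 2,000 commands — each one is a separate operation that can fail, drift or need a retry.

```python
# Without a distributed cloud, ops might script one kubectl call per
# cluster. Each command is an independent failure mode to monitor.

def rollout_commands(contexts, image):
    """Build the per-cluster kubectl command for an image update."""
    return [
        ["kubectl", "--context", ctx, "set", "image",
         "deployment/inventory", f"inventory={image}"]
        for ctx in contexts
    ]

# Hypothetical kubeconfig context per distribution center.
centers = [f"warehouse-{i:04d}" for i in range(2000)]
commands = rollout_commands(centers, "registry.example.com/inventory:v2")

print(len(commands))  # one command (and one failure mode) per cluster
```

With 2,000 clusters, even a 1% failure rate means 20 manual follow-ups per rollout — which is precisely the burden centralized, distributed-cloud management removes.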

Distributed cloud provides a single view

Distributed cloud enables teams to focus more on the actual application and development of the code, and less on the deployment and operational aspect of it. Essentially, distributed cloud means that regardless of where your Kubernetes clusters are running, you can manage all of them from a central public cloud location.

See my video for a deeper dive on distributed cloud.

Going back to our warehouse scenario, if an operations engineer wants to roll out an app update, they’ll go to one of those two hubs managing the rest of the distributed cloud and let the public cloud handle the rollout to all the edge locations. This works because your public cloud knows exactly how those edge locations and all those clusters are running.

Considering all the apps and hybrid environments of a single enterprise, the overall integration portfolio can sprawl into a great many solutions, which is a problem in itself: it’s time-consuming, expensive and inefficient. What the enterprise needs is a single platform — a single pane of glass, if you will. That single view is part of a distributed cloud architecture.

Phases of integration with distributed cloud

Now, when a company has numerous technologies performing different integration functions, it is carrying a large amount of technical debt, so to speak — complexity the team has to manage and maintain. Distributed cloud addresses this in two general phases.

Phase 1: Freeing DevOps from the burden of platform management

Suppose you need a way to manage your public APIs, establish rate limiting and set up public gateways. Instead of investing resources to manually set up open source projects, you can go with an enterprise solution like IBM API Connect®, knowing that IBM Cloud will maintain and manage it as a service in the IBM public cloud. As a user, you aren’t managing it; you simply go in and use the software.

Taking advantage of the as-a-service capabilities allows your developers to focus on what matters: writing and publishing APIs. The company saves effort, time and money.
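Rate limiting is a good example of the kind of policy an API management layer enforces on your behalf. Here is a minimal token-bucket sketch of the underlying idea — this is an illustration of the general technique, not how IBM API Connect implements it, and the rate and capacity values are arbitrary.

```python
import time

# Token bucket: the bucket refills at a steady rate and each request
# spends one token. Requests beyond the burst capacity are rejected
# until tokens accumulate again.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start full
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(8)]
print(results.count(True))  # the burst capacity admits 5 requests
```

Building, tuning and operating even this small policy across many gateways is exactly the undifferentiated work that an as-a-service offering takes off the team's plate.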

Phase 2: Consolidating is crucial for integration

Integration involves more than API management. As I mentioned, there are three main categories: API management, event-driven architectures and data transfer. And, of course, there are smaller sub-categories under all three of those.

Having different vendors for these categories means multiple environments, which creates complexity. True integration is about reducing complexity by reducing the number of pieces and consolidating as much as possible. IBM Cloud Pak® for Integration, for example, consolidates multiple tools into a versioned package. This means you know your API management tool will work seamlessly with the event-driven architecture, message queuing and data transfer services.

Regardless of the platform, the need for consolidated integration is crucial. You don’t want the complexity of using multiple tools from multiple vendors and then lose time and money trying to patch everything together. The goal is that single pane of glass.

Consolidated integration with distributed cloud

How does distributed cloud tie into integration? Since the operational expense of multiple environments can be so high, with many clusters running in different places, companies look at the centralized control of a distributed cloud to solve the puzzle.

When you’re running the same version of a container across multiple edge environments, you need a single, consolidated way to get data about all of them. With a distributed cloud, it is easier to see which clusters are running, whether your applications are healthy and, most importantly, the condition of those clusters’ service endpoints.

IBM Cloud Satellite is the distributed cloud offering from IBM public cloud. When you have a distributed cloud like IBM Cloud Satellite, you can simply query it to give you all of the application endpoints for all of the clusters running in all your edge locations. Just like that, it’ll output a list for you.
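The "single query" idea can be sketched as a simple aggregation: with centralized control, one call flattens every cluster's application endpoints into one list. The data structure, cluster names and URLs below are hypothetical placeholders, not the actual IBM Cloud Satellite API.

```python
# Hypothetical inventory a central control plane might hold:
# cluster name -> {app name -> service endpoint}.
edge_clusters = {
    "warehouse-0001": {"inventory": "https://inventory.wh0001.example.com"},
    "warehouse-0002": {"inventory": "https://inventory.wh0002.example.com"},
    "hub-us-east": {
        "inventory": "https://inventory.east.example.com",
        "billing": "https://billing.east.example.com",
    },
}

def list_endpoints(clusters):
    """Flatten every (cluster, app, endpoint) triple into one list."""
    return [
        (cluster, app, url)
        for cluster, apps in clusters.items()
        for app, url in apps.items()
    ]

for cluster, app, url in list_endpoints(edge_clusters):
    print(f"{cluster:15s} {app:10s} {url}")
```

The value isn't the loop itself — it's that the control plane already knows every cluster, so you never have to connect to 2,000 edge locations one by one to assemble this view.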

From there, you have a better way of integrating the apps that need to work together without wasting time on unnecessary integrations. Not all of those edge locations are talking to each other, but they do need to talk to the main hub. With IBM Cloud Satellite, you can make sure that communication is seamless without wasting time elsewhere.

The key thing to remember is you do not want to use multiple integration tools from multiple vendors. It’s expensive, time-consuming and — thanks to distributed cloud — it’s unnecessary.

Get started with IBM Cloud Satellite

Distributed cloud allows teams to have faster and easier health checks, seamless integration, better visibility and reliable management. On top of it all, distributed cloud solutions like IBM Cloud Satellite help reduce the overall pain points and operational expenses of managing multiple cloud native environments across multiple locations. That is something every DevOps team can celebrate.

Learn more about IBM Cloud Satellite.
