March 9, 2020 By Stuart Cunliffe 5 min read

We know that enterprises have moved, or are moving, their workloads to the cloud. It’s rare, however, that a company wants to move all of its applications to the cloud — and even rarer that it would choose a single cloud provider to host those applications. The most common strategy is a mixture of on-premises and public cloud providers, or what is referred to as hybrid multicloud. Most organizations today are already some way along this complex journey.

In this blog post, I’ll explain how enterprise workloads are changing, what it means for businesses, and how Red Hat OpenShift can help organizations with hybrid multicloud.

First, a few definitions

A multicloud strategy encompasses cloud services from more than one cloud provider. As cloud adoption has increased, lines of business have found new ways to consume cloud capabilities that meet their specific demands for compliance, security, scalability or cost. That, along with clients' reluctance to rely on a single provider, has led the majority of organizations to adopt a multicloud approach.

Hybrid cloud refers to a computing environment that combines private and public clouds within a single organization. Some businesses use a hybrid cloud solution to increase security, whereas others use it to match required performance to a particular application. It allows applications and data to move between these environments.

A hybrid multicloud combines these strategies, providing private and public cloud solutions from more than one provider.

A recent report by Flexera indicated that 84 percent of enterprises have a multicloud strategy, while those planning a hybrid cloud strategy grew to 58 percent.

Our workloads are changing

Not only do we have numerous options on where we can deploy our applications; we now have multiple options on how we can deploy them.

Traditionally, we had stand-alone systems that ran a single operating system, utilizing all the hardware owned by that system, and they normally provided a single-function application. Hypervisor virtualization then brought us the ability to “share” hardware resources, such as processing, memory and I/O. This meant we could pack more virtual machines onto a single server; however, each one still tended to have its own operating system and application. Containerization introduced the concept of operating system virtualization. This allowed us to deploy and run applications in a “trimmed down” environment, consuming far fewer resources than virtual machines as the containers only need to contain the runtime components required for that application.

What does that mean to an enterprise customer?

Enterprise customers are looking for ways to transform some of their traditional applications into cloud-native applications, but at the same time realize there is a need to keep certain workloads on virtual machines, be it on-premises or in the cloud. This introduces the need to deploy and manage both containerized and virtual machine workloads on the most suited hybrid multicloud environment.

Not only that, but what about managing these containers after deployment? How do we ensure they started correctly? How do we monitor them to confirm they're performing as expected? How do we provide access to those applications, and how do we upgrade them? This gets far more complex in environments running hundreds or thousands of individual containers.

This is where Red Hat OpenShift can help.

How Red Hat OpenShift can help

Red Hat OpenShift Container Platform is an enterprise-ready platform as a service built on Kubernetes. Its primary purpose is to provide users with a platform to build and scale containerized microservice applications across a hybrid cloud, and we could write numerous blog posts on all the features available within OpenShift. It can be installed on premises, in a public cloud or delivered through a managed service.

OpenShift Container Platform architecture is based around master hosts and worker nodes.

Master hosts contain the control plane components, such as the API server, the cluster store (etcd), the controller manager and HAProxy, and should be deployed in a highly available configuration to avoid single points of failure. With OpenShift Container Platform v3.11, master hosts run RHEL or RHEL Atomic Host on IBM Power Systems or x86.

The worker nodes are managed by the master hosts and are responsible for running the application containers; these workloads are scheduled onto the worker nodes by the master hosts. With OCP v3.11, worker nodes run RHEL or RHEL Atomic Host on IBM Power Systems, x86 or IBM Z. Currently, you cannot mix node architectures within the same OCP cluster.

What does OpenShift Container Platform offer?

OpenShift provides a number of deployment methods, such as DeploymentConfigs, StatefulSets and DaemonSets. These allow us to define how a containerized application should be deployed, including key details such as the number of pod replicas, which images those pods should use, scaling options, upgrade strategy, health checks, monitoring, service IP and routing information, the port to listen on and so forth. We can then add that application template to the catalogue and allow self-service portal users to deploy it within their own project space.
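As an illustration, a minimal DeploymentConfig covering a few of these details might look like the following sketch. The application name, image path and port are hypothetical placeholders, not values from any real environment:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app                    # hypothetical application name
spec:
  replicas: 2                     # desired number of pods
  selector:
    app: my-app                   # pods managed by this DeploymentConfig
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-project/my-app:latest  # placeholder image
        ports:
        - containerPort: 8080     # port the application listens on
        readinessProbe:           # health check before traffic is routed to the pod
          httpGet:
            path: /healthz        # hypothetical health endpoint
            port: 8080
```

A Service and Route would typically accompany this object to expose the application, but the DeploymentConfig alone is enough to show the declarative pattern.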

We now have a declarative state describing what we want the application to look like, and OpenShift will monitor the application to ensure it matches that defined state. Should it deviate from the desired state, OpenShift takes action to resolve the issue.

Let’s take an example where an application was defined as requiring two pods in its configuration template and for some reason one of those pods terminated. The OpenShift master would notice this deviation and take action (in this case, create a new pod), as shown in the following chart:
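The control loop behind this self-healing behavior can be sketched in a few lines of Python. This is purely illustrative of the reconciliation idea; OpenShift's actual controllers are far more involved:

```python
def reconcile(desired_replicas, running_pods):
    """Compare desired state with observed state and return corrective actions.

    A toy version of what a replication controller does: create pods when
    too few are running, delete pods when too many are running.
    """
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: emit one "create" action per missing replica.
        return [("create", i) for i in range(diff)]
    if diff < 0:
        # Too many pods: emit "delete" actions for the surplus.
        return [("delete", pod) for pod in running_pods[:-diff]]
    return []  # observed state already matches desired state

# The template asked for 2 pods, but one has terminated:
actions = reconcile(2, ["my-app-1"])
# A single corrective "create" action restores the desired state.
```

In the real platform this loop runs continuously against the cluster store, so deviations are corrected within seconds without operator intervention.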

IBM Cloud Pak for Multicloud Management and Cloud Automation Manager

Not only can Red Hat OpenShift deliver containerized applications; it also gives us the ability to manage a multi-cluster environment and drive more traditional IT environments, such as virtual machines on premises or in a public cloud. IBM Cloud Pak for Multicloud Management allows us to manage multiple Kubernetes clusters both on premises and in the cloud, giving us a single view of all of our clusters and enabling us to perform multi-cluster management tasks.

From the Multicloud Management Cloud Pak, we’re able to deploy IBM Cloud Automation Manager (CAM) within our OpenShift cluster. Cloud Automation Manager gives us the ability to provision virtual machine based applications across multiple hybrid clouds by allowing us to register additional cloud providers, such as the on-premises IBM PowerVC (based on OpenStack) and VMware vSphere environments, or public cloud providers such as IBM Cloud, Amazon EC2, Microsoft Azure, Google Cloud and the like.

Once we have added our cloud providers, we can configure Terraform-based templates that define how a VM should look within the target environment. These templates can be published as service offerings that appear in the OCP catalogue, as shown in the following graphic:

Figure 4: OpenShift catalog, including virtual machine options
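A Terraform template of this kind is typically little more than a provider block plus a resource definition. As a hedged sketch for a vSphere target (the VM name, sizing values and every `var.*` reference are hypothetical placeholders that would be supplied when the template is registered against a cloud connection):

```hcl
# Hypothetical Terraform template for provisioning a vSphere virtual machine.
provider "vsphere" {
  user           = var.vsphere_user
  password       = var.vsphere_password
  vsphere_server = var.vsphere_server
}

resource "vsphere_virtual_machine" "app_vm" {
  name             = "demo-app-vm"        # placeholder VM name
  resource_pool_id = var.resource_pool_id
  datastore_id     = var.datastore_id
  num_cpus         = 2
  memory           = 4096                 # MB
  guest_id         = "rhel7_64Guest"

  network_interface {
    network_id = var.network_id
  }

  disk {
    label = "disk0"
    size  = 40                            # GB
  }
}
```

When such a template is published to the catalogue, the variables become the input fields the requesting user fills in at deployment time.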


With the combination of Red Hat OpenShift, IBM Cloud Automation Manager and IBM PowerVC, it’s possible to deliver a self-provisioning catalogue of applications similar to that shown in figure 4, providing users the ability to request multiple applications spanning a hybrid multicloud environment.

If you’re looking for support with a hybrid multicloud solution on IBM Power Systems, contact IBM Systems Lab Services today.

For further information about the journey to hybrid multicloud and IBM Power Systems, read “IBM Power Systems: Journey to the hybrid multicloud.”
