March 9, 2020 By Stuart Cunliffe 5 min read

We know that enterprises have moved, or are moving, their workloads to the cloud. It’s rare, however, that a company wants to move all of its applications to the cloud — and even rarer that it would choose a single cloud provider to host those applications. The most common strategy is a mixture of on-premises and public cloud providers, or what is referred to as hybrid multicloud. Most organizations today are already some way along this complex journey.

In this blog post, I’ll explain how enterprise workloads are changing, what it means for businesses, and how Red Hat OpenShift can help organizations with hybrid multicloud.

First, a few definitions

A multicloud strategy encompasses cloud services from more than one cloud provider. As cloud adoption has increased, lines of business often find new ways to consume cloud capabilities to meet their specific demands for compliance, security, scalability or cost. That, along with the client’s reluctance to rely on a single provider, has led to the majority of organizations adopting a multicloud approach.

Hybrid cloud refers to a computing environment that combines private and public clouds within a single organization. Some businesses use a hybrid cloud solution to increase security, whereas others use it to match required performance to a particular application. It allows applications and data to move between these environments.

A hybrid multicloud combines these strategies, providing private and public cloud solutions from more than one provider.

A recent report by Flexera indicated that 84 percent of enterprises have a multicloud strategy, while those planning a hybrid cloud strategy grew to 58 percent.

Our workloads are changing

Not only do we have numerous options on where we can deploy our applications; we now have multiple options on how we can deploy them.

Traditionally, we had stand-alone systems that ran a single operating system, utilizing all the hardware owned by that system, and they normally provided a single-function application. Hypervisor virtualization then brought us the ability to “share” hardware resources, such as processing, memory and I/O. This meant we could pack more virtual machines onto a single server; however, each one still tended to have its own operating system and application. Containerization introduced the concept of operating system virtualization. This allowed us to deploy and run applications in a “trimmed down” environment, consuming far fewer resources than virtual machines as the containers only need to contain the runtime components required for that application.

What does that mean to an enterprise customer?

Enterprise customers are looking for ways to transform some of their traditional applications into cloud-native applications, but at the same time realize there is a need to keep certain workloads on virtual machines, be it on-premises or in the cloud. This introduces the need to deploy and manage both containerized and virtual machine workloads on the most suited hybrid multicloud environment.

Not only that, but what about managing these containers post deployment? How do we ensure they started correctly? How do we monitor them to make sure they're performing as expected? How do we allow access to those applications, and how do we upgrade them? This gets more complex in environments where we're running hundreds or thousands of individual containers.

This is where Red Hat OpenShift can help.

How Red Hat OpenShift can help

Red Hat OpenShift Container Platform is an enterprise-ready platform as a service built on Kubernetes. Its primary purpose is to give users a platform to build and scale containerized microservice applications across a hybrid cloud; we could write numerous blog posts on its features alone. It can be installed on premises, in a public cloud or delivered through a managed service.

OpenShift Container Platform architecture is based around master hosts and worker nodes.

Master hosts contain the control plane components, such as the API server, cluster store (etcd), controller manager, HAProxy and the like, and should be deployed in a highly available configuration to avoid single points of failure. With OpenShift Container Platform v3.11, master hosts are RHEL or Atomic servers running on IBM Power Systems or x86.

The worker nodes are managed by the master hosts and are responsible for running the application containers; the master hosts schedule application workloads to them. With OCP v3.11, worker nodes run RHEL or Atomic on IBM Power Systems or x86, or Linux on IBM Z. Currently, you cannot mix node architectures within the same OCP cluster.

What does OpenShift Container Platform offer?

OpenShift provides a number of deployment methods, such as DeploymentConfig, StatefulSets and DaemonSets. These allow us to define how our containerized application should be deployed, including key details such as the number of pod replicas, which images to use for those pods, scaling options, upgrade options, health checks, monitoring, service IP and routing information, the port to listen on and so forth. We can then add that application template to the catalogue and allow self-service portal users to deploy it within their own project space.
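To make the idea concrete, here is a minimal sketch, using plain Python dictionaries rather than a real OpenShift API call, of the kind of declarative specification such a template captures: pod count, image, port and a health check. Names like "my-app" and the registry URL are purely illustrative.

```python
def make_deployment_spec(name, image, replicas, port):
    """Build an OpenShift-style deployment manifest as a plain dict (illustrative)."""
    return {
        "apiVersion": "apps.openshift.io/v1",
        "kind": "DeploymentConfig",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,  # desired number of pod replicas
            "template": {
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                        # health check: OpenShift only routes traffic to ready pods
                        "readinessProbe": {"httpGet": {"path": "/healthz", "port": port}},
                    }]
                }
            },
        },
    }

spec = make_deployment_spec("my-app", "registry.example.com/my-app:1.0",
                            replicas=2, port=8080)
print(spec["spec"]["replicas"])  # 2
```

In practice this specification would be written as YAML and applied with the `oc` CLI, but the structure, and the fact that it describes a desired end state rather than a sequence of steps, is the same.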

We now have a declarative state describing what we want that application to look like, and OpenShift will monitor it to ensure it matches the defined state. Should it deviate from that desired state, OpenShift takes action to resolve the issue.

Let’s take an example where an application was defined as requiring two pods in its configuration template and for some reason one of those pods terminated. The OpenShift master would notice this deviation and take action (in this case, create a new pod), as shown in the following chart:
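The reconciliation behavior described above can be sketched in a few lines of Python. This is a toy illustration of the control-loop idea, not OpenShift's actual controller code:

```python
def reconcile(desired_replicas, running_pods):
    """Return the actions needed to bring the running pod count to the desired count."""
    actions = []
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        actions += ["create_pod"] * diff      # e.g. a pod terminated unexpectedly
    elif diff < 0:
        actions += ["delete_pod"] * (-diff)   # e.g. scaling down removes extras
    return actions

# Desired state: 2 pods; observed: only "pod-a" survived, so one pod is created.
print(reconcile(2, ["pod-a"]))  # ['create_pod']
```

The real control plane runs loops like this continuously against the cluster store, which is what makes the platform self-healing: the operator declares intent once, and the system converges on it.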

IBM Cloud Pak for Multicloud Management and Cloud Automation Manager

Not only can Red Hat OpenShift deliver containerized applications; it can also manage a multi-cluster environment and drive more traditional IT environments, such as on-premises or public cloud virtual machines. IBM Cloud Pak for Multicloud Management allows us to manage multiple Kubernetes clusters both on premises and in the cloud, giving us a single view of all of our clusters and enabling us to perform multi-cluster management tasks.

From the Multicloud Management Cloud Pak, we're able to deploy IBM Cloud Automation Manager (CAM) within our OpenShift cluster. Cloud Automation Manager gives us the ability to provision virtual machine-based applications across multiple hybrid clouds by allowing us to register additional cloud providers, such as the on-premises IBM PowerVC (based on OpenStack) and VMware vSphere environments, or public cloud providers such as IBM Cloud, Amazon EC2, Microsoft Azure, Google Cloud and the like.

Once we have added our cloud providers, we can configure Terraform-based templates that define how a VM should look within the target environment. These templates can be published as service offerings that appear in the OCP catalogue, as shown in the following graphic:
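As a hedged sketch of what such a template parameterizes, the snippet below renders a minimal HCL-like resource block from a few inputs. The provider name, resource type and attribute names are illustrative assumptions; real CAM templates are written in Terraform's HCL and registered through Cloud Automation Manager:

```python
def render_vm_template(provider, vm_name, cpus, memory_gb):
    """Render a minimal HCL-like VM resource block as a string (illustrative only)."""
    return (
        f'resource "{provider}_vm" "{vm_name}" {{\n'
        f'  name   = "{vm_name}"\n'
        f'  cpus   = {cpus}\n'
        f'  memory = {memory_gb * 1024}\n'  # hypothetical field, expressed in MB
        f'}}\n'
    )

print(render_vm_template("powervc", "db-server", cpus=4, memory_gb=16))
```

The point is the same declarative pattern we saw with containers: the catalogue entry describes the desired VM, and the provisioning engine makes it so on whichever registered provider the user selects.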

Figure 4: OpenShift catalog, including virtual machine options

Conclusion

With the combination of Red Hat OpenShift, IBM Cloud Automation Manager and IBM PowerVC, it’s possible to deliver a self-provisioning catalogue of applications similar to that shown in figure 4, providing users the ability to request multiple applications spanning a hybrid multicloud environment.

If you’re looking for support with a hybrid multicloud solution on IBM Power Systems, contact IBM Systems Lab Services today.

For further information about the journey to hybrid multicloud and IBM Power Systems, read “IBM Power Systems: Journey to the hybrid multicloud.”
