March 30, 2022 | By Adam Jollans | 3 min read

Create applications across hybrid cloud, on-premises and at the edge.

Digital transformation has packed 10 years’ worth of change into just the past few years, driven by needs such as enabling remote access to services and connecting people working from their homes.

How can organizations keep up with the change to “digital-first” and deliver new business value faster and more effectively to stay competitive, while also reducing costs?

More specifically, how can IT leaders:

  • Modernize applications to make them easier to build and maintain?
  • Optimize IT infrastructure to share resources more effectively?
  • Support portability of workloads across multiple clouds to protect investments?
  • Automate and manage workloads from core to cloud to edge?

Cloud-native applications on hybrid cloud

A revolution has been taking place in how applications are built, deployed and managed. Cloud-native development was adopted in public cloud as a faster, more agile and more reliable way to develop the next generation of applications.

Cloud-native applications are built on three key technologies to deliver flexibility:

  1. Containers, to package applications with their dependencies to run anywhere.
  2. Microservices, to build applications out of loosely coupled services.
  3. Orchestration, to deploy and manage containerized applications at scale.
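
To make the microservices idea concrete, here is a minimal sketch of a single, loosely coupled service, using only the Python standard library. The `/health` endpoint, port handling and service shape are illustrative, not from any specific product mentioned in this article.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """One small, self-contained service endpoint."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

# Bind to an ephemeral port and serve requests on a background thread.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Call the service the way another microservice would: over HTTP.
with urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/health") as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()

print(status, payload)  # prints: 200 {'status': 'ok'}
```

In a cloud-native deployment, a service like this would be packaged into a container image along with its dependencies and then deployed and scaled by an orchestrator such as Kubernetes, rather than run directly.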

What is less widely discussed is that cloud-native applications can also be built and deployed in the data center, on private clouds and at the edge, and that these new applications can access existing data to build the mission-critical systems of tomorrow.

When this technology is combined with consistent development tools, portability across platforms and common operational skills, it enables a new approach to developing workloads across the hybrid cloud.

Cloud-native workloads can be optimized for hardware architectures, including IBM zSystems and LinuxONE, IBM Power, x86 and Arm. They can also be co-located with data to maximize performance, simplify application management and support data residency requirements.
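
As a sketch of how such placement can be expressed, a Kubernetes workload can be pinned to a particular hardware architecture using the standard `kubernetes.io/arch` node label. The deployment name and image below are illustrative, not taken from any IBM offering:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-colocated-app        # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: data-colocated-app
  template:
    metadata:
      labels:
        app: data-colocated-app
    spec:
      nodeSelector:
        kubernetes.io/arch: s390x   # schedule onto IBM zSystems / LinuxONE nodes
      containers:
      - name: app
        image: registry.example.com/app:1.0   # illustrative image reference
```

The same manifest, with a different `kubernetes.io/arch` value (for example `amd64`, `arm64` or `ppc64le`), places the workload on another architecture without changing the application itself.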

Building a hybrid cloud platform

The first step in building cloud-native applications that can run anywhere is a hybrid cloud platform that spans all possible deployment targets, from core to cloud to edge, and provides the framework for building and deploying applications and services.

We see an open-source foundation as essential for enabling future flexibility, leveraging community innovation and providing consistency across client development teams. That’s why many cloud platforms are built on open-source components like Linux, containers and Kubernetes. These open-source components then need to be integrated, hardened for enterprise workloads and made easy to use and manage.

IBM’s approach to delivering a hybrid cloud platform is based on Red Hat OpenShift. Red Hat OpenShift is the industry’s leading enterprise Kubernetes platform, providing a consistent foundation for building, deploying and managing applications across the hybrid cloud. Red Hat OpenShift 4.10 became available in March 2022 and improves installer flexibility, automated operations and workload extensibility.

Choosing a hybrid cloud infrastructure

Underpinning the hybrid cloud platform is the infrastructure on which it runs — public or private cloud, traditional infrastructure and edge. It’s important that the hybrid cloud platform runs on all IT infrastructure essential to the business, rather than just a single public cloud or on-premises server. This enables existing data and applications to be part of the hybrid cloud alongside new cloud-native applications. It also allows workloads to be placed on whichever infrastructure best suits them, and it avoids the risk of vendor lock-in.

Red Hat OpenShift runs on the leading public clouds, including IBM Cloud, where Red Hat OpenShift on IBM Cloud provides fully automated container hosting. It can then be extended to on-prem, edge or public cloud environments with IBM Cloud Satellite.

Red Hat OpenShift also runs on-premises — such as on IBM Power, IBM zSystems and IBM LinuxONE — close to existing data and applications. IBM Cloud Infrastructure Center then provides IaaS capability for Linux on IBM zSystems and can help simplify the Red Hat OpenShift installation experience. IBM has also recently announced IBM zCX Foundation for Red Hat OpenShift, which enables Red Hat OpenShift applications to run in a z/OS address space with IBM support.

IBM Spectrum Fusion and Red Hat OpenShift Data Foundation provide persistent storage for Red Hat OpenShift applications through a container-native hybrid cloud data platform.

Deploying hybrid cloud software

The business value is then delivered by the hybrid cloud software workloads, which can be containerized to run on top of the hybrid cloud platform. These workloads can include databases and automation software as well as ISV and business applications. Once containerized, they can take advantage of the scalability and orchestration provided by tools like Kubernetes.
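
As a minimal sketch of what “containerized” means in practice, a workload and its dependencies can be described in a Dockerfile. The base image, file names and port below are illustrative assumptions, not the packaging of any specific IBM product:

```dockerfile
# Start from a minimal base image and add only what the workload needs.
FROM registry.access.redhat.com/ubi9/python-311

# Package the application together with its dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# The resulting image runs unchanged on any container host.
EXPOSE 8080
CMD ["python", "app.py"]
```

Once built, the same image can be pushed to a registry and deployed by Kubernetes or Red Hat OpenShift, which handle the scheduling, scaling and restarting of its containers.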

IBM has containerized its key software to run on Red Hat OpenShift across multiple hardware architectures and has packaged it into a range of AI-powered IBM Cloud® Paks.

Red Hat hosts an open software marketplace for hybrid cloud applications from ISVs called the Red Hat Marketplace.

Running cloud-native applications everywhere

Cloud-native applications are no longer just for public cloud. The availability of a hybrid cloud platform that runs across public cloud, private cloud and traditional infrastructure has opened the possibility of a common approach to developing applications across the hybrid cloud — helping enable faster delivery of new value to businesses and their customers.

Learn more about IBM and Red Hat solutions.
