Hybrid and Portable

IBM Well-Architected Framework
Overview

Hybrid is the ability to run workloads in and across one or more public clouds, private clouds and on-premises. Hybrid cloud provides orchestration, management and application portability across public, private and on-premises infrastructure. The result is a single, unified and flexible distributed computing environment where an organization can run and scale its traditional or cloud-native workloads on the most appropriate computing model.

Portable is the ability to move workloads and data between cloud computing environments, enabling migration from one public cloud to another, or even to a private cloud, without significant reconfiguration of the workload.

The combination of hybrid and portable principles facilitates interoperability across platforms, data, and applications. Applications built with these design principles in mind have a higher degree of interoperability.

Enterprises adopt a multicloud strategy to prevent vendor lock-in or to satisfy data residency requirements. They also want the ability to lift and shift workloads, migrate workloads from one cloud to another, or run applications on multiple clouds. Designing cloud applications with portability and interoperability in mind is key for such enterprises.

More on the IBM Hybrid Cloud Point of View

Principles

Workloads should be built once and deployed consistently everywhere: they should not need to be altered or reconfigured to fit the services and constraints of the platform on which they are hosted. That is the mantra of Red Hat OpenShift, IBM's hybrid cloud container platform, which is available on almost every public cloud. The rest of the principles in this pillar build on achieving this ideal.

Containers decouple workloads from the underlying infrastructure that hosts them. This makes containerized workloads portable to any platform that hosts a compatible container runtime, which is why containers are the preferred packaging and deployment technology for applications.

Kubernetes is an open source platform for deploying and managing containerized workloads on any container engine that supports the Kubernetes Container Runtime Interface (CRI). Kubernetes and Kubernetes-derived management platforms can manage containerized workloads on public and private clouds, as well as on-premises infrastructure.

Workloads that rely on a specific representation or type of infrastructure are only portable to environments that provide that infrastructure. Instead, use a cloud-agnostic Infrastructure as Code (IaC) tool to define the environment and how applications should be configured, then use that tool to deploy consistently across any cloud and even on-premises. Deploying applications with IaC makes it easy to target any cloud or to move from one cloud to another; it also helps with scaling and prevents configuration drift.
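As a minimal IaC sketch, the Terraform fragment below declares a Kubernetes Deployment once so the same definition can be applied to any cluster the provider is pointed at; the resource names and image are illustrative placeholders, not part of any real environment.

```terraform
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

# Declarative definition: applying it repeatedly converges the
# environment to this state, which prevents configuration drift.
resource "kubernetes_deployment" "app" {
  metadata {
    name = "demo-app" # hypothetical application name
  }
  spec {
    replicas = 3
    selector {
      match_labels = { app = "demo-app" }
    }
    template {
      metadata {
        labels = { app = "demo-app" }
      }
      spec {
        container {
          name  = "demo-app"
          image = "registry.example.com/demo-app:1.0" # placeholder image
        }
      }
    }
  }
}
```

Because the definition is cloud-agnostic, moving the workload to another cluster is a matter of changing the provider configuration, not the resource itself.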

Workloads that rely on platform-native services are locked to that platform. Solutions should instead adopt industry-standard service APIs and interfaces that are independent of the platform on which the service is deployed.

Practices

To truly embrace portability, teams might consider adopting open source solutions as part of the stack, minimizing the lock-in that would otherwise complicate moving workloads.

Open source software might be chosen for its low (or no) license cost, the flexibility to customize the source code, or the existence of a large community supporting the application. However, it can incur other costs, typically for network integration, end-user and IT support, and other services that are usually included with proprietary software. Overall, this decision should be balanced against the business case and the likelihood of moving workloads.

  1. Solutions should use abstraction layers where appropriate.
  2. Solutions should avoid using cloud service provider native supporting services. If this can't be avoided, highly decoupled architecture is suggested to allow for layer-based substitution of capabilities as business and technical needs change.
  3. Define standard reference architectures that sanction appropriate platforms and services.
  4. Use enterprise architecture or design authority to define minimum standards for portability.
  5. Establish "wrapper services" or customized services that use commodity services where possible and feasible, to reduce the risk of exposure to any single external cloud service provider.
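The abstraction-layer and wrapper-service practices above can be sketched in a few lines: application logic codes against a provider-neutral interface, and each cloud provider is just another adapter behind it. The `ObjectStore` interface and in-memory adapter here are hypothetical names for illustration; a real deployment would add, say, an S3- or COS-backed adapter implementing the same interface.

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Local stand-in adapter; cloud-backed adapters would
    implement the same two methods against a provider SDK."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application logic depends only on the ObjectStore interface,
    # so changing cloud providers means swapping one adapter class.
    store.put(f"reports/{name}", body)
```

The substitution point is the constructor call: swapping `InMemoryStore()` for a cloud-backed adapter changes nothing in `archive_report` or any other caller.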

Choosing a deployment platform that is common across all environments, like Red Hat OpenShift, is fundamental to achieving portability and interoperability across multicloud and on-premises environments. Well-architected solutions use containers as their common deployment unit because of containers' small size and widespread support across most operating systems, cloud service providers, and infrastructure platforms.

Understanding containers

Containers, like Docker, decouple workloads from the underlying infrastructure that hosts them. This makes containerized workloads portable to any platform that hosts a compatible container runtime.
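A minimal Dockerfile illustrates the decoupling: the workload and its dependencies are packaged together, so the resulting image runs unchanged on any compatible runtime. The base image and file names are placeholder assumptions for the sketch.

```dockerfile
# Hypothetical image build: everything the workload needs ships
# with it, so the host only has to provide a container runtime.
FROM registry.access.redhat.com/ubi9/python-311
COPY app.py .
CMD ["python", "app.py"]
```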

A container orchestration platform like Kubernetes automates the deployment and management of containerized applications across cloud environments. Kubernetes is an open source platform that can deploy and manage containerized workloads on any container engine that supports the Kubernetes Container Runtime Interface (CRI). Kubernetes and Kubernetes-derived management platforms are used to manage containerized workloads on public and private clouds, as well as on-premises infrastructure.
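The portability the paragraph above describes comes from the manifest format itself: a Deployment like the hypothetical one below can be applied unchanged to any conformant Kubernetes cluster, whether it runs on a public cloud or on-premises. The names and image are illustrative only.

```yaml
# Hypothetical manifest: the same declaration works on any
# conformant cluster, regardless of where it is hosted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0 # placeholder image
```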

Kubernetes Overview

Configuring and deploying on various clouds can be daunting, and day-2 operations can be overwhelming, which is why cloud-agnostic tools like Ansible, Terraform, Chef, and Puppet are a must. Ansible is an open source IT automation tool that automates provisioning, configuration management, application deployment, and orchestration in the cloud and on-premises. It can be used to automate the installation of software, provision infrastructure, and even improve security and compliance through timely patching of systems.

Provisioning hundreds or thousands of servers manually is not feasible, which is why Ansible playbooks are preferred by enterprises looking to scale IT quickly and reliably. With an Ansible playbook you can build one instance, then run the same playbook against any number of additional servers to apply the same infrastructure parameters.
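The scale-out described above comes from the inventory, not the play: a playbook like the hypothetical one below configures one host or a thousand identically, depending only on which hosts the `webservers` group lists.

```yaml
# Hypothetical playbook: the play is declarative, so rerunning it
# converges every targeted host to the same state.
- name: Apply baseline web server configuration
  hosts: webservers        # inventory group; assumed for the sketch
  become: true
  tasks:
    - name: Ensure nginx is installed and current
      ansible.builtin.package:
        name: nginx
        state: latest
    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```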

What is Ansible? How Ansible works

IBM Cloud Satellite configures a distributed cloud that enables you to consume cloud services anywhere you need them and to deploy, manage, and control workloads across on-premises environments, edge computing environments, and public cloud environments from any cloud vendor. It delivers cloud services, APIs, access policies, logging, and monitoring across all locations. This brings the advantages of a cloud-like service to your on-premises infrastructure, along with the flexibility to move those workloads to other providers' clouds based on policy. With the proliferation of edge devices and the tremendous increase in data, moving compute closer to the data is more cost effective; it also enhances security and simplifies day-2 operations.

IBM Cloud Satellite

While security is a separate pillar, it permeates every aspect of cloud computing, whether private cloud, public cloud, or on-premises infrastructure. The flexibility of moving workloads to different environments comes with added security challenges, and security can't be sacrificed for the sake of portability. Take a layered, defense-in-depth security strategy across the entire infrastructure and application stack and life cycle.

Red Hat hybrid cloud security approach
Next steps