Structuring your deployment
This topic provides guidance and best practices for planning your installation and use of IBM Cloud Pak® for Integration.
Here are some of the questions that you might need to consider when you plan your deployment:
- Do you want to install the Long Term Support or Continuous Delivery version of Cloud Pak for Integration?
- How many Red Hat OpenShift clusters do you need?
- How should you deploy the Cloud Pak for Integration operators and instances within those clusters?
- Can you share clusters between different user namespaces, environments, or lifecycle phases?
- What are the implications of sharing a cluster between two or more logically separate activities?
- How can you make the best use of namespaces (OpenShift projects)?
Contents:
- Terminology
- Considerations before deployment
- Recommended deployment approach
- Alternative installation approach (common for non-production)
Terminology
- namespace
- A Kubernetes object that, together with other namespaces, divides a single Kubernetes cluster into multiple virtual clusters. This Kubernetes term is equivalent to the project term in Red Hat OpenShift. The phrases "in a namespace" and "in all namespaces" in this topic refer to the Red Hat OpenShift operator installation modes shown as A specific namespace on the cluster and All namespaces on the cluster in the Red Hat OpenShift interface. For more information, see Namespaces in the Kubernetes documentation and Adding Operators to a cluster in the Red Hat OpenShift documentation.
- operator
- An operator provides custom resource types in a Kubernetes cluster. Operators react to custom resources (CRs) to create managed resources. For more information, see the Operators on Red Hat® OpenShift® article on the Red Hat website.
- pod
-
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. For more information, see Pods in the Kubernetes documentation.
Sharing infrastructure across workloads in Red Hat OpenShift
Although many organizations successfully use namespaces to provide a degree of separation between teams, workloads, and environments, Red Hat OpenShift itself does not natively support strong isolation of different workloads running in a single cluster. This challenge applies equally to workloads that differ in any of the following ways:
- Different environments in the development lifecycle for an application (such as development, test, and production environments for the same application).
- Different user applications in the same environment (for example, development for Application A versus Application B versus Application C).
- Workloads for different customer organizations in the traditional sense of SaaS or hosted service multi-tenancy.
Comparing Red Hat OpenShift with a traditional deployment (on virtual machines) helps clarify the challenge of providing workload isolation:
- A Red Hat OpenShift cluster does not offer the same isolation as a configuration where different VMs are deployed for each workload, even if those VMs are running on the same hypervisor. For example, pods from different namespaces can share a worker node and compete for resources, and cluster-level configuration elements such as CustomResourceDefinition, global pull secrets, and ImageContentSourcePolicy affect all namespaces.
- Because Kubernetes worker nodes are often VMs themselves, the equivalent scenario is running multiple workloads inside the same VM.
Considerations before deployment
- Long Term Support or Continuous Delivery releases
-
Continuous Delivery (CD) releases offer more frequent access to new features. Long Term Support (LTS) releases provide extended support on the same version of Red Hat OpenShift Container Platform. For more information about which releases of Cloud Pak for Integration are LTS or CD, see IBM Cloud Pak for Integration Software Support Lifecycle Addendum on the IBM® Support website.
- Network policies
-
Cloud Pak for Integration 2022.2 does not support a deny-all network policy by default. Select a network policy that meets your organization's requirements while supporting optimal system performance. If you apply a deny-all network policy, ensure that you also create network policies that allow the required traffic between Cloud Pak for Integration pods.
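For example, if a deny-all policy is applied, a minimal sketch of a policy that restores pod-to-pod traffic within a single namespace might look like the following. The namespace name cp4i is an assumption, and your capabilities might also need cross-namespace or ingress rules that this sketch does not cover:
```yaml
# Illustrative only: allows all pod-to-pod traffic inside the cp4i namespace
# after a default deny-all policy. Adjust selectors and namespaces to match
# the traffic that your Cloud Pak for Integration instances actually require.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: cp4i
spec:
  podSelector: {}          # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}  # accept traffic from any pod in the same namespace
  policyTypes:
    - Ingress
```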
- User roles and organizational structure
-
To ensure your deployment approach addresses the needs of various user roles, you might consider the organizational structure of your company:
- Does a single administration team manage both the Red Hat OpenShift cluster and the product-level administration of capabilities (such as application integration or messaging), or are these responsibilities handled by different teams?
- Is there a single team for all capabilities, or does each type have its own specialized admin team?
- How much access is required to the Red Hat OpenShift layer by the application developers, who are the ultimate consumers of the instances that are deployed?
- Some capabilities, such as App Connect, might require frequent Red Hat OpenShift access by developers to deploy updated integration flows for testing.
- For other capabilities, like IBM MQ, API Connect, and DataPower®, the server is more commonly configured up-front. In this case, developers can be granted product-level access inside that deployment (rather than at the Red Hat OpenShift level) to complete self-service tasks.
- OpenShift Container Platform (Kubernetes-based)
-
Several technical characteristics of OpenShift Container Platform are inherited from the upstream Kubernetes project and affect the ability to provide full isolation between tenants.
You should consider these constraints:
- Network bandwidth cannot be easily isolated to a specific workload.
- Custom resources (CRs) that are deployed in a namespace might over-consume CPU or memory, which impacts pods in other namespaces that are running on the same node. This problem is difficult to prevent, although a resource quota can cap total consumption per namespace (see the sketch after this list). See Node placement for guidance on placing Cloud Pak instances on specific nodes.
- CRs can also contend for ephemeral storage on a worker node. Running out of ephemeral storage can cause the eviction of pods, which impacts end users.
- A cluster runs a single version of Red Hat OpenShift Container Platform (OCP) that applies to all namespaces. For this reason, all workloads must be ready to support new versions of OCP at the same time, and any issues introduced by upgrading to a new version of OCP will apply immediately to everything that is deployed there.
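To limit how much a single namespace can consume, you can set a ResourceQuota in that namespace. The following sketch uses illustrative values and the hypothetical namespace cp4i-dev; size the limits for your own workloads:
```yaml
# Illustrative ResourceQuota that caps the total CPU, memory, and ephemeral
# storage that pods in the cp4i-dev namespace can request or consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cp4i-dev-quota
  namespace: cp4i-dev
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 64Gi
    limits.cpu: "32"
    limits.memory: 128Gi
    requests.ephemeral-storage: 50Gi
    limits.ephemeral-storage: 100Gi
```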
- Operators for Red Hat OpenShift and Cloud Pak for Integration
-
An operator consists of the following components:
- Custom resource definitions (CRDs) that define OpenShift objects. CRDs are cluster-scoped objects and cannot be constrained to individual namespaces.
- Controllers that manage custom resources, which are instances of objects defined by the CRD.
Because CRDs in Red Hat OpenShift are objects that are shared across all namespaces, if you use two controllers that manage the same kind of object in different namespaces, the CRD must be compatible with both controllers. In general, the operator's conversion webhook converts between the versions of the API provided by the CRD. However, having two controllers does increase the complexity of the system and, with it, the chance of encountering unexpected errors.
Within Red Hat OpenShift, both operators and dependencies are installed and managed by Operator Lifecycle Manager (OLM). OLM enforces the following rules for installation:
- Because each individual CR can be managed by only one controller, if an operator is installed in a specific namespace, it cannot also be installed in all namespaces.
- OLM installs an operator only if its CRs and its dependent CRs are each managed only by one controller. So, you cannot install Cloud Pak for Integration in all namespaces if any of its dependencies, for example, CouchDB, are installed in a specific namespace.
The implications of installing a Cloud Pak for Integration operator in all namespaces versus in a specific namespace are as follows (a sketch of how each installation mode is expressed in OLM appears after this list):
- If an operator is installed in all namespaces, its controller manages all of its owned CRs within all namespaces, and there can only be one instance of that operator.
- Unlike some operators, the Cloud Pak for Integration operator can manage multiple CRs and multiple CR versions. Cloud Pak for Integration operators are tested to be compatible with previous CR versions. So, when you upgrade the Cloud Pak for Integration operator, the following benefits apply:
- The operator can manage all the CRs in the cluster (not just the latest versions).
- The operator does not upgrade the application automatically (automatic upgrade could cause CRs to become invalid); instead, you update the CR to upgrade the application. (This behavior does not apply to the Event Streams operator.)
- If an operator is installed in a specific namespace, its controller manages only the CRs that the operator owns in that namespace. So, you can install copies of the operator in more than one namespace.
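The following sketch shows how the two installation modes are typically expressed in OLM resources. The package name, channel, and catalog source are assumptions based on a typical Cloud Pak for Integration installation; check the catalog for your release before you use them, and apply only one of the two modes for a given operator:
```yaml
# Mode 1 (all namespaces): create the Subscription in openshift-operators,
# which already contains a cluster-wide OperatorGroup.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-integration-platform-navigator
  namespace: openshift-operators
spec:
  name: ibm-integration-platform-navigator   # operator package name (verify in your catalog)
  channel: v7.1                               # illustrative channel; varies by release
  source: ibm-operator-catalog
  sourceNamespace: openshift-marketplace
---
# Mode 2 (a specific namespace): an OperatorGroup that scopes the operator to
# the cp4i namespace, plus a Subscription in that same namespace.
# Do not apply both modes for the same operator on one cluster.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cp4i-operator-group
  namespace: cp4i
spec:
  targetNamespaces:
    - cp4i
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-integration-platform-navigator
  namespace: cp4i
spec:
  name: ibm-integration-platform-navigator
  channel: v7.1
  source: ibm-operator-catalog
  sourceNamespace: openshift-marketplace
```
In the first mode, the cluster-wide OperatorGroup in openshift-operators makes the operator available to every namespace; in the second, the OperatorGroup restricts it to the cp4i namespace.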
- IBM Cloud Pak for Integration operator
-
After you install and deploy the Platform UI, you can use it to easily deploy all available capabilities.
You can install the IBM Cloud Pak for Integration operator either in all namespaces or in a specific namespace on the cluster. For each IBM Cloud Pak for Integration operator that you install, you can create only one Platform UI instance, as follows:
- If you install the operator in all namespaces, you can install only one Platform UI instance per cluster. The Platform UI instance displays instances of capabilities from the whole cluster.
- If you install the operator in a specific namespace or namespaces on the cluster, you must create a separate Platform UI instance in each namespace where it is required. The Platform UI instance displays instances of capabilities from that namespace.
The Platform UI respects the access rights that are granted to each authenticated user of the cluster. So, the Platform UI shows a user only the instances to which they were granted access.
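As an illustrative sketch only, a Platform UI instance is created by applying a PlatformNavigator custom resource in the target namespace. The API version, license ID, version string, storage class, and namespace in this example are assumptions that vary by release and by cluster; take the exact values from the documentation for your release:
```yaml
# Hypothetical Platform UI instance in the cp4i namespace.
apiVersion: integration.ibm.com/v1beta1
kind: PlatformNavigator
metadata:
  name: integration-navigator
  namespace: cp4i
spec:
  license:
    accept: true                     # you must explicitly accept the license
    license: L-RJON-CD3JKX           # license ID; differs for each release
  replicas: 3                        # 3 replicas for high availability
  version: 2022.2.1                  # version string; differs for each release
  storage:
    class: ocs-storagecluster-cephfs # any RWX-capable storage class on your cluster
```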
- IBM Cloud Pak foundational services
-
The IBM Cloud Pak foundational services in IBM Cloud Pak for Integration enable functions such as authentication, authorization, and licensing.
By default, only one instance of Cloud Pak foundational services is deployed per cluster. This instance is installed into a namespace called ibm-common-services and is shared between installations. You can configure multiple instances of Cloud Pak foundational services by using custom namespaces. See Installing in a custom namespace.
The Cloud Pak foundational services instance is typically configured to integrate with the corporate LDAP server. If so configured, the instance can adopt the same user registry and group structure that is used elsewhere in the installation environment.
Recommended deployment approach
- Deploy a separate cluster for the environment in each stage of the development lifecycle
-
To restrict access to customer data and live systems, and to prevent any non-production activities (development or test, for example) from affecting customer-facing endpoints, industry best practice is to deploy a production workload on different infrastructure from non-production deployments. Even within non-production environments, the safest option is to use a separate cluster for each environment.
This practice offers the best separation and flexibility for implementing new functions without the risk of affecting established environments. For example, to upgrade to a new version of OpenShift Container Platform without the risk of breaking your test environment, use a separate cluster for development.
Note: Using separate clusters for each environment requires deploying and managing extra resources, so users commonly share a cluster for certain environments. This practice trades isolation (improved security) for reduced infrastructure costs. You should have a clear understanding of the risks that are involved before you embark on this approach.
- When you deploy a separate cluster in each environment, install operators across all namespaces
-
When a cluster has a single purpose, there is little benefit to deploying operators in a specific namespace on the cluster. Installing operators across all namespaces makes the deployment simpler and more manageable.
If the IBM Cloud Pak for Integration operator is installed in all namespaces, the following points apply:
- There can only be one instance of the Platform UI for that cluster.
- The Platform UI instance provides access-controlled management to CR instances that are deployed across all namespaces in the cluster.
Note: If the operator for any component or dependency of IBM Cloud Pak for Integration is installed in all namespaces with OLM, then all component dependencies must also be installed in all namespaces with OLM.
- Use a namespace for each logical grouping of users who manage deployments in the cluster
-
If a single team manages all deployed instances, this guideline means starting with a single namespace, unless there is a need to subdivide into more than one namespace. If multiple, independent teams exist, you can use different namespaces to hold each team's deployments. This arrangement benefits from the role-based access control (RBAC) that is provided by Red Hat OpenShift for each namespace.
Depending on your organizational structure (for example, whether a single team covers all product capabilities or whether each capability has its own specialized administration team), you might need a specific namespace. For example, you might need a specific namespace for the following teams:
- A cross-functional team that manages instances of various IBM Cloud Pak for Integration capabilities for one application.
- A domain-specific team that manages a single type of capability (for example, IBM MQ) on behalf of multiple applications.
Grouping a set of instances into a namespace allows the administrator to efficiently apply controls for things like access permissions, resource quotas (for resources such as CPU and memory), and network policies, and to more easily filter log output.
Create supporting configuration items, such as Secrets and Config Maps, only in the namespace in which they are required by the CR instances. This action restricts access to the minimum necessary set of users and workloads.
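For example, the following sketch grants one team's group administrative access that is scoped to a single namespace. The group and namespace names are illustrative assumptions; the group must exist in the identity provider that your cluster uses:
```yaml
# Illustrative RoleBinding: members of the mq-admins group get the built-in
# admin ClusterRole, but only within the cp4i-mq namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mq-admins-namespace-admin
  namespace: cp4i-mq
subjects:
  - kind: Group
    name: mq-admins
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin            # built-in ClusterRole, scoped here to one namespace
  apiGroup: rbac.authorization.k8s.io
```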
- Subdivide your deployment into multiple namespaces, even if the same team of people is managing instances in all those namespaces
-
Using a specific namespace for each application domain or business domain can make it easier to group smaller numbers of instances together or to help match the scope of other consumers (such as application developers).
Using multiple namespaces can be particularly beneficial if there are large numbers (for example, hundreds) of instances.
If you are installing software other than IBM Cloud Pak for Integration in the same cluster, remember that IBM Cloud Pak for Integration must be installed in a specific namespace on the cluster if any of its dependencies are installed in a specific namespace. This limitation on installing operators in all namespaces might impact any additional (separate) use you might make of the same dependencies.
Alternative installation approach (common for non-production)
- A cluster can be shared between different environments in the development lifecycle, and between different teams
-
Typically, you share a cluster between different workloads by using a different namespace for each environment. This arrangement maintains logical separation between the different deployments.
Because resources such as network bandwidth cannot be easily isolated to a specific workload, Red Hat OpenShift does not provide perfect multi-tenancy within a cluster. This limitation increases the risk that activity in one namespace, such as performance testing or testing of an upgrade process, has an impact on workloads that are running in other namespaces. Another potential drawback of sharing a cluster is that when you need to upgrade to a new version of OpenShift Container Platform, any issues or failures that are introduced immediately affect all the environments (such as development and test environments) that are running in that cluster.
An inefficient use of resources can occur in the following situations:
- When separate clusters are used in smaller organizations, where each environment contains a limited number of instances. This arrangement can be an issue even when small worker nodes are used.
- In non-production environments, where the impact of problems that are caused by interference between the environments seems easier to handle. For example, where a performance run in a test environment is scheduled outside the developers' normal working hours.
- In a shared cluster approach, you can benefit from installing operators in a specific namespace instead of in all namespaces
-
You can have different versions of the IBM Cloud Pak for Integration operator in different namespaces. This independence means that you can maintain separate upgrade cycles in different namespaces. For example, you can try out a new version of Cloud Pak for Integration alongside an existing deployment.
Note: Because the operator CRDs are objects that are shared across all namespaces on the cluster, upgrade behavior is not completely isolated between namespaces. A change in a CRD applies to operators in all namespaces, affecting the respective deployments. You can still independently upgrade the Platform UI instance in different namespaces, and the Platform UI displays only CR instances that are running in the same namespace. If you choose to use a specific namespace, do not reinstall a lower operator version than the latest operator version in the whole cluster.
The following exceptions apply:
- In this setup, you must create one Platform UI instance in each namespace. The Platform UI is a stateless component, so changes in the Platform UI version are unlikely to affect running workloads. For this reason, the risk that is involved in running Platform UI in all namespaces is low.
- You can install IBM Cloud Pak foundational services in only one namespace per cluster, so any changes in the Cloud Pak foundational services version apply to deployments in all namespaces.
- If you install an operator in a specific namespace, you cannot also install it in all namespaces.
- A single instance of a resource-intensive component can be shared across multiple user domains to optimize resource requirements
-
In high-availability configurations, or in any situation where multiple user namespaces need to use resource-intensive components, such as API Connect, consider deploying only one instance of each component and allowing shared usage across the relevant user workloads.
You can also use multi-tenancy-style features within the capability itself to provide separation, for example by using a different provider organization for each user namespace in API Connect. However, there are limits to the tenant isolation that is provided in this scenario.
Note: Sharing an instance might add complexity to the implementation of the applications that use that instance. For example, an application might need to dynamically configure the API endpoint that it uses, rather than depending on a single, fixed name throughout the environments.
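As one illustrative way to avoid a fixed endpoint name, the following sketch injects the endpoint from a per-environment ConfigMap into an application container. All names, images, and URLs in this example are hypothetical:
```yaml
# Illustrative: each environment's namespace carries its own ConfigMap, so the
# same application image can point at a shared or environment-specific API
# endpoint without a fixed host name in the code.
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-endpoints
  namespace: app-team-dev
data:
  ORDERS_API_URL: https://api.gateway.example.com/dev/orders
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-client
  namespace: app-team-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-client
  template:
    metadata:
      labels:
        app: orders-client
    spec:
      containers:
        - name: orders-client
          image: registry.example.com/orders-client:latest
          envFrom:
            - configMapRef:
                name: api-endpoints   # exposes ORDERS_API_URL to the container
```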