Storage
The integration capabilities provided in IBM Cloud Pak® for Integration use persistent storage to store state data reliably and resiliently. The cluster administrator must provide appropriate storage classes that meet the requirements of the respective integration capability.
This information provides guidance on the storage requirements for running the IBM Cloud Pak for Integration.
Recommended storage providers
For Linux on x86 hardware, the following recommended storage providers have been validated across all the capabilities of IBM Cloud Pak for Integration:
- OpenShift Container Storage version 4.x, from version 4.2 or higher
- Portworx Storage, version 2.5.5 or above
- IBM Cloud Block storage and IBM Cloud File storage
- IBM Storage Suite for IBM Cloud Paks. This suite of offerings includes:
  - File storage from IBM Spectrum Scale
  - Block storage from IBM Spectrum Virtualize, FlashSystem, or DS8K
  - Object storage from IBM Cloud Object Storage or Red Hat Ceph
  - OpenShift Container Storage version 4.x, from version 4.2 or higher

  Note: A limited entitlement for IBM Storage Suite for IBM Cloud Paks is included in the IBM Cloud Pak for Integration entitlement.
- NFS for File RWX
- IBM CSI driver backed by IBM XIV storage for Block RWO
- OpenShift Container Storage (OCS) 4.7 or above (except for use with MQ), including OCS through IBM Storage Suite. Note: OCS 4.7 requires OpenShift Container Platform (OCP) 4.7.
Note: The storage providers themselves are not supported directly by the IBM Cloud Pak for Integration support team. Ensure that you have an appropriate support arrangement with your storage provider, so that if an issue is identified within the storage layer, you can engage that provider directly.
Additional storage providers are recommended for specific components. Refer to the Notes section below the table.
IBM recommends that you use volume encryption with your chosen storage provider to protect data at rest.
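As an illustration, volume encryption is typically requested through the storage class. The following sketch builds such a manifest as a Python dict, assuming the AWS EBS CSI driver; the storage class name is a placeholder, and the provisioner name and `encrypted` parameter are specific to that driver, so other providers will differ:

```python
# Illustrative StorageClass requesting encrypted volumes from the AWS EBS
# CSI driver. Provisioner names and encryption parameters vary by provider.
encrypted_storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "ebs-encrypted"},  # placeholder name
    "provisioner": "ebs.csi.aws.com",       # AWS EBS CSI provisioner
    "parameters": {
        "type": "gp3",         # EBS volume type
        "encrypted": "true",   # request encryption at rest
    },
    "reclaimPolicy": "Delete",
    "volumeBindingMode": "WaitForFirstConsumer",
}
```

A manifest like this would normally be serialized to YAML and applied by the cluster administrator; the point here is only where the encryption setting lives.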
Storage requirements for the included components
| Integration capability | Storage type, file or block (see Note 2) | Access mode (see Note 1) | Notes |
|---|---|---|---|
| Deployment and navigation interface (Platform Navigator) | N/A | N/A | Ensure that common services requirements are met. See "Logging, licensing, and related services (common services)" below. |
| Asset sharing and reuse (Asset Repository) | File for asset data; block for CouchDB | File: RWX; block: RWO | |
| Transaction tracing and troubleshooting (Operations Dashboard) | File for configuration database and shared data; block for tracing data | File: RWX; block: RWO | |
| API Management (API Connect) | Block | RWO | |
| Application integration dashboard (App Connect Dashboard) | File | RWX | |
| Application integration server (App Connect) | N/A | N/A | N/A |
| Application integration designer (App Connect Designer) | Block | RWO | |
| High Speed File Transfer (Aspera HSTS) | File | RWX | |
| Event streams (Event Streams) | Block | RWO | |
| Queue manager (MQ) | File | RWX (multi-instance); RWO (single-instance) | |
| Gateway (DataPower) | N/A | N/A | N/A |
| Logging, licensing, and related services (common services) | Block | RWO | |
Notes
1. Kubernetes access modes include ReadWriteOnce (RWO), ReadWriteMany (RWX), and ReadOnlyMany (ROX).
2. None of the IBM Cloud Pak for Integration components require raw block device storage; in Kubernetes terms, the storage is mounted into the pod as a directory inside the container using a volume mode of `Filesystem`.
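To make the notes concrete, here is a minimal sketch of a PersistentVolumeClaim built as a Python dict, requesting shared (RWX) file storage with the `Filesystem` volume mode described above. The claim name, storage class name, and size are illustrative placeholders, not values mandated by the Cloud Pak:

```python
# Illustrative PVC for shared file storage; name, storage class, and size
# are placeholders chosen for this example.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "example-shared-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],       # RWX: shared across replicas
        "volumeMode": "Filesystem",             # mounted as a directory in the pod
        "storageClassName": "example-file-sc",  # cluster-specific storage class
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
```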
Limits on number of persistent volumes for public cloud providers
Public cloud providers typically limit the number of persistent volumes permitted per region or data center by default, to prevent excessive usage. If required, this limit can usually be raised by opening a support ticket with the provider.
Additionally, Kubernetes imposes default limits on the number of IaaS-provided persistent volumes that can be attached to each worker node, as shown in the following table. Because of these per-node volume limits, some environments need more worker nodes to host particular capabilities than the CPU or memory resource requirements alone would imply.
These volume limits do not apply to Software Defined Storage (SDS) providers, such as OpenShift Container Storage (OCS) and Portworx, as those volumes are not provided by the cloud IaaS layer.
Persistent volume limits for public cloud providers
| Public cloud volume provider | Volume limit per worker node | Details |
|---|---|---|
| IBM Cloud Block storage for VPC | 12 volumes | VPC service limits |
| AWS Elastic Block Store (EBS) | 11-39 volumes depending on instance type | Instance volume limits |
| Azure Disk | 4-64 as defined by the "max data disks" per type | Azure VM sizes |
| Google Persistent Disk | 16-128 "max number of persistent disks" per type | Machine types |
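One way to use these limits is to estimate the minimum number of worker nodes needed purely to attach a given number of IaaS-provided volumes. A small sketch, where the limits dict takes the lower bound of each range from the table above (actual limits depend on the instance type):

```python
import math

# Per-node attachable volume limits: lower bounds from the table above.
volume_limit_per_node = {
    "ibm_vpc_block": 12,
    "aws_ebs": 11,      # lowest of the 11-39 range
    "azure_disk": 4,    # smallest "max data disks" value
    "gce_pd": 16,       # smallest per machine type
}

def min_nodes_for_volumes(total_volumes: int, provider: str) -> int:
    """Minimum worker nodes needed just to attach total_volumes."""
    return math.ceil(total_volumes / volume_limit_per_node[provider])

# Example: 40 volumes (the API Connect HA example below) on IBM Cloud
# Block storage for VPC at 12 volumes per node.
print(min_nodes_for_volumes(40, "ibm_vpc_block"))  # → 4
```

In practice the real scheduler decision also depends on CPU, memory, and topology constraints; this is only the volume-attachment floor.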
The following table provides a guide for the number of persistent volumes required by each of the capabilities in a typical configuration for a high availability (HA) or non-HA deployment.
Number of persistent volumes for each component
| Integration capability | Number of volumes for non-HA example | Number of volumes for HA example | Notes |
|---|---|---|---|
| Component deployment interface (Platform Navigator) | N/A | N/A | N/A (no storage required) |
| Asset sharing and reuse (Asset Repository) | 2 | 4 | One RWX volume shared across replicas, and one RWO volume per replica |
| Transaction tracing and troubleshooting (Operations Dashboard) | 8 | 12 | 6 RWX volumes shared across replicas, and 1 RWO volume each per replica of Data and Master nodes (i.e. 3 replicas of each for HA = 6 RWO volumes) |
| API Lifecycle Management (API Connect) | 12 | 40 | Mgmt: 3 per node + 1 shared; Portal: 5 per node; Analytics: 2 per node; Gateway: 1 per node for non-HA, 3 per node for HA |
| Application integration dashboard (App Connect Dashboard) | 1 | 1 | Single RWX volume is shared across Dashboard replicas |
| Application integration server (App Connect) | N/A | N/A | N/A |
| Application integration designer (App Connect Designer) | 1 | 3 | One RWO volume per CouchDB replica |
| High Speed File Transfer (Aspera HSTS) | 1 | 7 | One RWX volume is shared across replicas, and for HA one optional Redis volume per master/worker replica |
| Event streaming (Event Streams) | 2 | 6 | One volume per Kafka broker and one per ZooKeeper instance (HA example is 3 broker, 3 ZooKeeper) |
| Queue manager (MQ) | 1-4 | 1-4 | 1-4 volumes per queue manager depending on the desired data separation for performance or data security reasons |
| Gateway (DataPower) | N/A | N/A | N/A |
| Logging, licensing, and related services (common services) | 6 | 6 | The medium profile is currently used for both non-HA and HA scenarios |
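The per-capability counts above can be totalled for capacity planning. A sketch under stated assumptions: the counts mirror the HA examples in the table, with MQ taken at 1 volume per queue manager (it can be up to 4 depending on data separation), and your actual counts will depend on configuration:

```python
# HA example volume counts from the table above. MQ is assumed at 1 volume
# per queue manager; it can be 1-4 depending on data separation needs.
ha_volumes = {
    "asset_repository": 4,
    "operations_dashboard": 12,
    "api_connect": 40,
    "app_connect_dashboard": 1,
    "app_connect_designer": 3,
    "aspera_hsts": 7,
    "event_streams": 6,
    "mq": 1,
    "common_services": 6,
}

def total_volumes(selected: list[str]) -> int:
    """Total persistent volumes for the chosen capabilities."""
    return sum(ha_volumes[name] for name in selected)

# Example: an HA deployment with Event Streams, MQ, and common services.
print(total_volumes(["event_streams", "mq", "common_services"]))  # → 13
```

Comparing such a total against the per-node attachment limits for your cloud provider shows whether volume count, rather than CPU or memory, drives the worker node count.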