Storage

The integration capabilities provided in IBM Cloud Pak® for Integration use persistent storage for reliable and resilient storage of state data. The cluster administrator must provide appropriate storage classes that meet the requirements of each integration capability.
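As a starting point, a cluster administrator can check which storage classes are already available before matching them against each capability's requirements. The following sketch uses the official Kubernetes Python client to list storage classes and their provisioners; the client library, kubeconfig access, and the script itself are assumptions for illustration, not part of the Cloud Pak installation.

```python
# Sketch: list the storage classes available in the cluster so they can be
# compared against the storage requirements of each integration capability.
# Assumes the `kubernetes` Python client is installed and a kubeconfig with
# sufficient access is available.
from kubernetes import client, config

def list_storage_classes():
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    storage_api = client.StorageV1Api()
    for sc in storage_api.list_storage_class().items:
        # Print the class name, its provisioner, and whether volume expansion
        # is allowed, which is useful when sizing persistent volumes.
        print(sc.metadata.name, sc.provisioner, sc.allow_volume_expansion)

if __name__ == "__main__":
    list_storage_classes()
```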

For guidance on the storage requirements for running IBM Cloud Pak for Integration, see:

Supported storage options by cloud provider

Details on supported storage options for IBM Cloud, Microsoft Azure, and Amazon Web Services (AWS) are provided here:

Limits on number of persistent volumes for public cloud providers

The number of persistent volumes permitted per public cloud region or data center is typically limited by default to prevent excessive usage. If required, this limit can usually be raised by opening a support ticket with the cloud provider.

Additionally, there are Kubernetes default limits on the number of IaaS-provided persistent volumes that can be attached to each worker node in each of the public clouds, as shown in the following table. Because of these per-node volume limits, some environments need more worker nodes to host particular capabilities than the CPU or memory resource requirements alone would imply.

These volume limits do not apply to Software Defined Storage (SDS) providers, such as OpenShift Container Storage (OCS) and Portworx, as those volumes are not provided by the cloud IaaS layer.

Persistent volume limits for public cloud providers

| Public cloud volume provider | Volume limit per worker node | Details |
| --- | --- | --- |
| IBM Cloud Block Storage for VPC | 12 volumes | VPC service limits |
| AWS Elastic Block Store (EBS) | 11-39 volumes, depending on instance type | Instance volume limits |
| Azure Disk | 4-64 volumes, as defined by the "max data disks" for each VM size | Azure VM sizes |
| Google Persistent Disk | 16-128 volumes, as defined by the "max number of persistent disks" for each machine type | Machine types |
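Because these attach limits can be reached before CPU or memory is exhausted, it can help to estimate the minimum worker-node count from the volume requirements as well. The sketch below is illustrative only; the 40-volume figure is a hypothetical workload, and the per-node limit of 12 is taken from the IBM Cloud Block Storage for VPC row above.

```python
# Sketch: estimate the minimum number of worker nodes needed to satisfy a
# per-node persistent volume attach limit. Illustrative only; the figures
# below are hypothetical examples, not sizing recommendations.
import math

def min_nodes_for_volumes(total_volumes: int, volumes_per_node: int) -> int:
    """Smallest worker-node count that can attach `total_volumes` volumes."""
    return math.ceil(total_volumes / volumes_per_node)

# Example: 40 IaaS-provided volumes on nodes limited to 12 attachments each
# needs at least 4 worker nodes, even if 2 nodes would satisfy the CPU and
# memory requirements on their own.
print(min_nodes_for_volumes(40, 12))   # -> 4
```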

The following table provides a guide to the number of persistent volumes required by each capability in a typical configuration, for both high availability (HA) and non-HA deployments.

Number of persistent volumes for each capability or runtime

| Integration capability or runtime | Number of volumes for non-HA | Number of volumes for HA | Notes |
| --- | --- | --- | --- |
| Deployment and navigation interface (Platform Navigator) | N/A | N/A | No storage required |
| Automation assets (Automation foundation assets) | 2 | 4 | One RWX volume shared across replicas, plus one RWO volume per replica |
| Integration tracing (Operations Dashboard) | 8 | 12 | 6 RWX volumes shared across replicas, plus 1 RWO volume per replica of the Data and Master nodes (3 replicas of each for HA = 6 RWO volumes) |
| API management (API Connect) | 12 | 40 | Management: 3 per node + 1 shared; Portal: 5 per node; Analytics: 2 per node; Gateway: 1 per node for non-HA, 3 per node for HA |
| Application integration dashboard (App Connect Dashboard) | 1 | 1 | A single RWX volume is shared across Dashboard replicas |
| Application integration server (App Connect) | N/A | N/A | N/A |
| Application integration designer (App Connect Designer) | 1 | 3 | One RWO volume per CouchDB replica |
| High speed transfer server (Aspera HSTS) | 1 | 7 | One RWX volume is shared across replicas; for HA, one optional Redis volume per master/worker replica |
| Event streaming (Event Streams) | 2 | 6 | One volume per Kafka broker and one per ZooKeeper instance (the HA example has 3 brokers and 3 ZooKeeper instances) |
| Messaging (MQ) | 1-4 | 1-4 | 1-4 volumes per queue manager, depending on the data separation required for performance or data security |
| Gateway (DataPower Gateway) | N/A | N/A | N/A |
| Automation foundation core (Authorization and common UI) | 1 | 1 | See System Requirements in the Automation foundation core documentation for more detailed sizing information |
| Platform UI, monitoring, licensing, and user management (foundational services) | 3 | 3 | The "Small" profile is currently used for both non-HA and HA scenarios. See Sizing for foundational services for more information |
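As a planning aid, the per-capability figures above can be added up for the set of capabilities you intend to deploy. The sketch below is a hypothetical example covering only a subset of the table, using the HA figures and assuming a single volume per MQ queue manager; it is not an official sizing tool.

```python
# Sketch: add up persistent volume counts from the table above for a planned
# deployment. Hypothetical example with HA figures for a subset of
# capabilities; MQ is assumed to use 1 volume per queue manager.
PV_PER_CAPABILITY_HA = {
    "Platform Navigator": 0,          # no storage required
    "API Connect": 40,
    "App Connect Designer": 3,
    "Event Streams": 6,
    "MQ": 1,                          # 1-4 per queue manager; 1 assumed here
    "foundational services": 3,
}

def total_persistent_volumes(capabilities):
    """Total persistent volumes for the selected capabilities (HA figures)."""
    return sum(PV_PER_CAPABILITY_HA[name] for name in capabilities)

print(total_persistent_volumes(
    ["API Connect", "Event Streams", "MQ", "foundational services"]))   # -> 50
```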