Sizing Red Hat OpenShift Container Platform compute nodes
When you size compute nodes, consider the following factors:
- The number of Maximo Application Suite applications and add-ons that are deployed.
- The size of the Maximo Application Suite workload.
- The degree of availability that is required during a compute node outage.
Smaller workloads favor smaller compute nodes, which can still provide availability during a compute node outage. Take, for example, the following OCP clusters:
- cluster1 with 3 compute nodes, each with 16 CPU and 64 GB of memory
- cluster2 with 6 compute nodes, each with 8 CPU and 32 GB memory
Both clusters have the same total compute and memory resource allocations. During a compute node outage, cluster1 loses a third of its capacity, while cluster2 loses only a sixth. Conversely, if the workload is so large that it requires many tens of 8 CPU x 32 GB compute nodes in cluster2, then provision fewer, larger compute nodes to minimize the inter-node I/O requirements. To determine the optimal compute node size, weigh both factors.
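The trade-off above can be sketched in a few lines of Python. This is an illustrative calculation only; the cluster names and node shapes come from the example, and the function name is hypothetical:

```python
# Sketch: compare the capacity impact of losing one compute node in two
# clusters that have equal total resources (example values from the text).
def outage_impact(nodes: int, cpu_per_node: int, mem_per_node: int) -> dict:
    """Return total cluster capacity and the fraction lost if one node fails."""
    return {
        "total_cpu": nodes * cpu_per_node,
        "total_mem_gb": nodes * mem_per_node,
        # Losing one node removes 1/nodes of the cluster's capacity.
        "lost_fraction": 1 / nodes,
    }

cluster1 = outage_impact(nodes=3, cpu_per_node=16, mem_per_node=64)
cluster2 = outage_impact(nodes=6, cpu_per_node=8, mem_per_node=32)

# Same aggregate capacity, different blast radius per node outage.
assert cluster1["total_cpu"] == cluster2["total_cpu"] == 48
print(f"cluster1 loses {cluster1['lost_fraction']:.0%} of capacity")
print(f"cluster2 loses {cluster2['lost_fraction']:.0%} of capacity")
```

The sketch shows why, at equal total capacity, more smaller nodes reduce the share of capacity lost to a single node outage.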
Another rule to apply when you size compute nodes is to allocate 15 GB - 25 GB of disk storage per CPU that is allocated to the compute node. If insufficient disk space is allocated, you might observe pod evictions due to disk pressure as the pod density per compute node increases. The following table shows example compute node profiles that follow this rule.
| Workload | CPU cores | Memory (GB) | Disk (GB) |
|---|---|---|---|
| Small | 4 | 16 | 100 |
| Medium | 8 | 32 | 200 |
| Large | 16 | 64 | 400 |
| Extra Large | 32 | 128 | 800 |
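As a quick sanity check, the profiles in the table can be validated against the 15 GB - 25 GB of disk per CPU rule. This is a sketch; the dictionary simply restates the table values:

```python
# Sketch: verify that each compute node profile from the table allocates
# between 15 and 25 GB of disk storage per CPU, per the sizing rule above.
profiles = {
    "Small":       {"cpu": 4,  "memory_gb": 16,  "disk_gb": 100},
    "Medium":      {"cpu": 8,  "memory_gb": 32,  "disk_gb": 200},
    "Large":       {"cpu": 16, "memory_gb": 64,  "disk_gb": 400},
    "Extra Large": {"cpu": 32, "memory_gb": 128, "disk_gb": 800},
}

for name, p in profiles.items():
    disk_per_cpu = p["disk_gb"] / p["cpu"]
    assert 15 <= disk_per_cpu <= 25, f"{name} violates the disk sizing rule"
    print(f"{name}: {disk_per_cpu:.0f} GB of disk per CPU")
```

Every profile in the table sits at the upper bound of 25 GB of disk per CPU, which leaves headroom against disk-pressure evictions as pod density grows.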