Scaling services

You can scale the resources that IBM® Cloud Pak for Data services use to support high availability or to increase processing capacity. Resources are scaled based on predefined resource configurations.

Before you begin

Required role: To complete this task, you must be an OpenShift® project administrator.

About this task

If a service supports scaling, you can scale the service at any time after you install it.

The following predefined scaling configurations are provided for services that support scaling:
  • Small (default configuration)
  • Medium
  • Large
Note: Some services might use a different default value, different predefined sizes, or service-specific scaling methods. For more information, see Services that support scaling.

You use the scaleConfig setting in the service CR to set the scaling configuration for a service. For services that don't support scaling, you must set the scaleConfig variable to NIL.

Before you scale up a service, ensure that your cluster can support the additional workload. If necessary, contact your IBM Support representative.
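
For example, one way to check current node utilization before you scale up is to run the following command (a sketch; this requires cluster metrics to be available):

oc adm top nodes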

The relevant scale files must be in the config-vars/scale/<arch> directory. These YAML files specify the values for the different scaling configurations. The scaleConfig variable in the CR specifies the YAML file name without the extension. Each service that supports scaling defines its own set of resources for each supported configuration and might include additional configurations.
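
For illustration, the directory for the x86_64 architecture might contain one file per predefined configuration. The following is a hypothetical listing; the actual files depend on the service:

ls config-vars/scale/x86_64
large.yaml  medium.yaml  small.yaml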

The following example shows a small configuration for the Cloud Pak for Data user management resources with x86_64 architecture:
Usermgmt:
  name: usermgmt
  kind: Deployment
  container: usermgmt-container
  replicas: 2
  resources:
    limits:
      cpu: 400m
      memory: 512Mi
    requests:
      cpu: 200m
      memory: 256Mi

Changing the scaling configuration of a service

To change the scaling configuration, add the scaleConfig variable to the service's custom resource (CR). For example, to change the configuration from the default (small) to medium, add the following setting to the spec section of the CR:

...
spec:
   scaleConfig: "medium"
   ...
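
Alternatively, you can apply the same change from the command line with the oc patch command. The following is a minimal sketch; the CR kind (ibmcpd), CR name (ibmcpd-cr), and project (cpd-instance) are placeholder values, so substitute the values for your service:

# Placeholder kind, name, and project; replace with your service's values.
oc patch ibmcpd ibmcpd-cr \
  --namespace cpd-instance \
  --type merge \
  --patch '{"spec": {"scaleConfig": "medium"}}'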

After you change the CR, the operator that watches for changes applies the new scaling configuration during its next reconcile loop. In this example, the configuration changes from small to medium during that loop.
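
To confirm that the operator applied the change, you can watch the affected workloads until they are reconciled. For example, for the usermgmt deployment from the earlier example (cpd-instance is a placeholder project name):

oc get deployment usermgmt --namespace cpd-instance --watch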

For services that don't support scaling, set the scaleConfig variable to NIL in the service's custom resource (CR), as shown in the following example:
...
spec:
   scaleConfig: "NIL"
   ...
With this setting, the operator skips any scaling requests for the service.

Services that support scaling

The following list shows which services support scaling. Additional information is provided for services that use a default size other than small, that support configurations other than small, medium, and large, or that use service-specific scaling methods other than the scaleConfig setting.

Note: When you scale a service, you must also scale the related services individually. See Service software requirements to identify the service dependencies, and then scale the services as necessary.
  • Cloud Pak for Data control plane: supports scaling.
  • Common core services: supports scaling.
  • Analytics Engine Powered by Apache Spark: supports scaling. Analytics Engine Powered by Apache Spark supports small, medium, and large configurations.
  • Cognos® Analytics: supports scaling. Cognos Analytics supports four sizes: small fixed (no scale), small, medium, and large.
  • Cognos Dashboards: supports scaling.
  • Data Refinery: supports scaling.
  • Data Virtualization: supports scaling, but does not use the spec.scaleConfig value in the custom resource for scaling. For information about the process for scaling Data Virtualization, see Scaling Data Virtualization.
  • DataStage®: supports scaling. In addition to changing the spec.scaleConfig value in the DataStage CR, you can set sizes by using the size variable in the CR (oc edit datastageservice datastage-cr), which supports the values small, medium, and large (see the sketch after this list). Direct updates to the stateful set (sts) are reverted by the operator unless ignoreForMaintenance is set to true in the CR.
  • Db2®: supports scaling. For more information about scaling Db2, see Scaling up Db2.
  • Db2 Big SQL: supports scaling. This service is scaled by using the Db2 Big SQL instance details page in the Cloud Pak for Data web interface.
  • Db2 Data Gate: does not support scaling.
  • Db2 Data Management Console: supports scaling. For information about scaling the Db2 Data Management Console service, see Scaling Db2 Data Management Console.
  • Db2 Event Store: does not support scaling.
  • Db2 Warehouse: supports scaling. For more information about scaling Db2 Warehouse, see Scaling up Db2.
  • Decision Optimization: supports scaling. Decision Optimization uses the scaling command to scale the service. When replicas are scaled, the CPUs and memory are also scaled:
      • 1 replica, 2.5 CPUs, 4 GB of memory (--config small)
      • 2 replicas, 7 CPUs, 10 GB of memory (--config medium)
      • 3 replicas, 10 CPUs, 14 GB of memory (--config large)
  • EDB Postgres: supports scaling.
  • Execution Engine for Apache Hadoop: does not support scaling.
  • Jupyter Notebooks with Python 3.7 for GPU: does not support scaling. Jupyter Notebooks is a runtime that is started with the requested resources only. You cannot scale up a started runtime. Every runtime is bound to a user and a project.
  • Jupyter Notebooks with R 3.6: does not support scaling. Jupyter Notebooks is a runtime that is started with the requested resources only. You cannot scale up a started runtime. Every runtime is bound to a user and a project.
  • IBM Match 360 with Watson™: supports scaling. Scaling also causes the CPU and memory limits to scale vertically on the supporting services that are included with the IBM Match 360 service, such as Elasticsearch.
  • MongoDB: does not support scaling.
  • OpenPages®: supports scaling. See Scaling OpenPages for an alternative scaling method.
  • Planning Analytics: supports scaling.
  • Product Master: supports scaling. When replicas are scaled, the CPUs and memory are scaled according to the selected size. The service also supports an extra large (xlarge) configuration.
  • RStudio® Server with R 3.6: does not support scaling. RStudio is a runtime that is started with the requested resources only. You cannot scale up a started runtime. Every runtime is bound to a user and a project.
  • SPSS® Modeler: supports scaling. Edit the spec.scaleConfig value in the SPSS Modeler CR spss-sample to scale the operands to the size that you want to use (small, medium, or large).
  • Watson Knowledge Catalog: supports scaling. To scale Watson Knowledge Catalog, first modify the Watson Knowledge Catalog CR size setting (spec.size) to medium or large, and then separately scale your Db2u instances. For more information, see Scaling up Db2. Legacy services always use one replica.
  • Watson Machine Learning: supports scaling. In addition to changing the spec.scaleConfig value in the Watson Machine Learning custom resource to small (default) or medium, you can use the cloudctl utility to scale Watson Machine Learning. Watson Machine Learning does not support large configurations.
  • Watson Machine Learning Accelerator: supports scaling.
  • Watson OpenScale: supports scaling. Watson OpenScale supports small and medium configurations. For large configurations, work with your IBM representative.
  • Watson Studio: supports scaling.
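
The following sketch shows the DataStage size change that is described in the list above as a single oc patch command instead of an interactive oc edit session. The CR kind (datastageservice) and CR name (datastage-cr) come from that entry; the project name (cpd-instance) is a placeholder:

# datastageservice and datastage-cr are taken from the DataStage entry above;
# cpd-instance is a placeholder project name.
oc patch datastageservice datastage-cr \
  --namespace cpd-instance \
  --type merge \
  --patch '{"spec": {"size": "medium"}}'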