Scaling Data Virtualization

You can scale the Data Virtualization service on Cloud Pak for Data to support high availability or to increase processing capacity.

You can modify the size, cpu, and memory settings in the service custom resource (CR) to scale Data Virtualization. Data Virtualization does not support the scaleConfig setting in the CR.

You can run oc edit bigsql db2u-dv to modify the following resource settings. For a scripted alternative, see the oc patch sketch after this list.

Number of workers
To change the number of worker pods, change the size field. This value counts only worker pods; it does not include the head pod. For example, size: 2 indicates one head pod and two worker pods. To scale to three worker pods, change this field to size: 3.
CPU and memory requests
To change the CPU and memory that the head pod and each worker pod request at startup, change the values in the requests field. For example, cpu: 8, memory: 16Gi indicates that the head and worker pods each start with 8 CPUs and 16Gi of memory.
CPU and memory limits
To change the maximum CPU and memory that the head pod and each worker pod can grow to, change the values in the limits field. Kubernetes throttles a pod that exceeds its CPU limit and kills a pod that exceeds its memory limit. For example, cpu: 8, memory: 16Gi indicates that head and worker pods are throttled if they use more than 8 CPUs and are killed if they use more than 16Gi of memory.
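
If you prefer a scripted change to editing the CR interactively, the same fields can be set with oc patch. This is a minimal sketch, assuming the CR name db2u-dv and the field paths shown in the spec example that follows:

# Scale to three worker pods (the size value does not count the head pod).
oc patch bigsql db2u-dv --type merge -p '{"spec": {"size": 3}}'

# Set the CPU and memory requests and limits for the head and worker pods.
oc patch bigsql db2u-dv --type merge \
  -p '{"spec": {"db2uClusterSpec": {"podConfig": {"db2u": {"resource": {"db2u": {"requests": {"cpu": 8, "memory": "16Gi"}, "limits": {"cpu": 8, "memory": "16Gi"}}}}}}}}'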

You can see these settings in the spec section of the CR as shown in the following example:

apiVersion: db2u.databases.ibm.com/v1alpha1
kind: BigSQL
metadata:
  name: db2u-dv
spec:
  mode: Dv
  version: "1.7.0"
  size: 2
  db2uClusterSpec:
    podConfig: 
      db2u:
        resource:
          db2u:
            requests:
              cpu: 8
              memory: 16Gi
            limits:
              cpu: 8
              memory: 16Gi
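
After you save your changes, the operator updates the head and worker pods to match the new spec. As a quick check, you can read the size field back from the CR and watch the pods restart. This is a sketch; the assumption that pod names include the CR name can vary by release:

# Confirm the worker count that the CR now specifies.
oc get bigsql db2u-dv -o jsonpath='{.spec.size}'

# Watch the head and worker pods restart with the new settings
# (assumes pod names include the CR name db2u-dv).
oc get pods | grep db2u-dv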

By default, Data Virtualization is provisioned with one worker pod, 4 CPUs, and 16Gi of memory.