Scaling up Db2 for IBM Knowledge Catalog

You can scale up the Db2 service on Cloud Pak for Data to support high availability or to increase processing capacity for IBM Knowledge Catalog services.

Scaling up the wkc-db2u instance

You can run a script to add more memory and CPU to the limited deployment of the Db2 service on Cloud Pak for Data.

Determine whether you should scale up Db2 by analyzing your monitoring tools (for example, Grafana graphs) for the CPU and memory consumption of the Db2u container. If you observe high CPU throttling, or if the container memory is at peak capacity, scale up the instance.
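As a quick check before you patch anything, you can inspect the pod's current consumption and configured limits with standard OpenShift commands. This is a sketch that assumes the pod name used in this procedure and that cluster metrics are available:

```shell
# Current CPU and memory usage per container (requires the cluster metrics server)
oc adm top pod c-db2oltp-wkc-db2u-0 --containers

# Configured requests and limits on the pod's containers
oc describe pod c-db2oltp-wkc-db2u-0 | grep -A 6 -i "limits"
```

Compare the reported usage against the configured limits; sustained usage near the limits is a signal to scale up.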

Complete these steps:
  1. Specify the CPU and memory limits. In this example, the CPU limit is set to 8 vCPU and the memory limit is set to 15 Gi. Adjust the values to your needs.
    oc patch db2ucluster db2oltp-wkc --type=merge --patch '{
      "spec": {
        "podConfig": {
          "db2u": {
            "resource": {
              "db2u": {
                "limits": {
                  "cpu": "8",
                  "memory": "15Gi"
                }
              }
            }
          }
        }
      }
    }'
  2. Wait for the c-db2oltp-wkc-db2u-0 pod to restart.
  3. Exec into the wkc-db2u pod:
    oc exec -it c-db2oltp-wkc-db2u-0 -- ksh
  4. Create the update_mem.sh script.
    cat <<'EOF' > /db2u/tmp/update_mem.sh
    #!/bin/bash
    source /etc/profile
    source /db2u/scripts/include/common_functions.sh
    source /db2u/scripts/include/db2_functions.sh
    
    # Get the PID 1 environment, which is required to access the OS environment variable MEMORY_LIMIT
    export_pid1_env
    source /db2u/scripts/include/db2_memtune_functions.sh
    
    # Deactivate database and stop Db2
    db2 terminate
    db2 force applications all
    db2 deactivate db ILGDB
    db2 deactivate db WFDB
    db2 deactivate db BGDB
    db2 deactivate db LINEAGE
    db2stop force
    rah 'ipclean -a'
    
    # Disable remote connections
    db2set DB2COMM -null
    
    # Update the normalized instance_memory % value derived from higher MEMORY_LIMIT
    rm -f /mnt/blumeta0/SystemConfig/instancememory
    
    # Start Db2 so that the instance memory can be recalculated, then restart to apply it
    db2start
    set_instance_memory
    db2stop
    db2start
    
    # Re-run autoconfigure to start using updated instance_memory %, and reapply any overrides
    run_autoconfigure
    apply_cfg_setting_to_db2 "-all"
    
    # Re-enable Db2 remote connections, start Db2 and activate db
    db2stop force
    rah 'ipclean -a'
    db2set DB2COMM=TCPIP,SSL
    db2start
    db2 activate db ILGDB
    db2 activate db WFDB
    db2 activate db BGDB
    db2 activate db LINEAGE
    EOF
  5. Set the execute permission on the update_mem.sh script:
    chmod +x /db2u/tmp/update_mem.sh
  6. Run the script:
    /db2u/tmp/update_mem.sh
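After the script completes, you can confirm that Db2 picked up the new settings. This is a hedged check, run from inside the same pod; the reported value depends on the limits you set:

```shell
# Confirm the recalculated instance memory setting
db2 get dbm cfg | grep -i instance_memory

# Confirm that remote connections were re-enabled (expect TCPIP,SSL)
db2set DB2COMM
```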

Updating the database configuration to match service scale-up

If you scale up services that depend on Db2, which puts extra load on the database, also adjust the Db2 database configuration to match the actual load.

For example, in a large cluster, the transaction log settings for the lineage database must match the throughput of profiling in the context of metadata enrichment. To achieve a throughput of analyzing 80 columns per second, increase the values of several settings:
LOGFILSIZ
Defines the size of each primary and secondary log file in units of 4 KB pages. The size of a log file limits the number of log records that can be written to it.
LOGPRIMARY
Specifies the number of primary log files that are preallocated, which results in a fixed amount of storage being allocated to the recovery log files.
LOGSECOND
Specifies the number of secondary log files. Secondary log files are created only as needed, that is, when the number of primary log files isn't sufficient.
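As a rough capacity check, the maximum transaction log space is LOGFILSIZ x 4 KB x (LOGPRIMARY + LOGSECOND). A minimal shell sketch, using the example values from the procedure that follows:

```shell
# Maximum log space for the example values (LOGFILSIZ is in 4 KB pages)
logfilsiz=30000   # 4 KB pages per log file
logprimary=10     # preallocated primary log files
logsecond=40      # secondary log files, created on demand
total_kb=$((logfilsiz * 4 * (logprimary + logsecond)))
echo "Maximum log space: ${total_kb} KB (~$((total_kb / 1024)) MB)"
```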

Complete these steps:

  1. Exec into the wkc-db2u pod:
    oc exec -it c-db2oltp-wkc-db2u-0 -- ksh
  2. Retrieve the current database configuration settings for the transaction logs:
    db2 get db cfg for lineage | grep LOG
    This sample response shows the default settings.
    Catalog cache size (4KB)              (CATALOGCACHE_SZ) = 383
    Log buffer size (4KB)                        (LOGBUFSZ) = 2150
    Log file size (4KB)                         (LOGFILSIZ) = 5000
    Number of primary log files                (LOGPRIMARY) = 5
    Number of secondary log files               (LOGSECOND) = 20
  3. For example, update the settings with the following values.
    • Set the log file size to 30000 4 KB pages, which corresponds to about 117 MB per log file.
    • Set the number of primary log files to 10.
    • Set the number of secondary log files to 40.

    Select the actual values based on your workload.

    The steps for updating the Db2 database configuration parameters are described in Changing Db2 configuration settings.

    The updated database configuration settings for the example look like this snippet:

    Catalog cache size (4KB)              (CATALOGCACHE_SZ) = 567
    Log buffer size (4KB)                        (LOGBUFSZ) = 2160
    Log file size (4KB)                         (LOGFILSIZ) = 30000
    Number of primary log files                (LOGPRIMARY) = 10
    Number of secondary log files               (LOGSECOND) = 40
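For reference, the three log parameters from this example can be set in a single command from inside the pod. This is a sketch with the example values; substitute values that match your workload, and see Changing Db2 configuration settings for the full procedure:

```shell
# Inside the c-db2oltp-wkc-db2u-0 pod: apply the example transaction log settings
db2 update db cfg for LINEAGE using LOGFILSIZ 30000 LOGPRIMARY 10 LOGSECOND 40

# Verify the new values
db2 get db cfg for LINEAGE | grep -E 'LOGFILSIZ|LOGPRIMARY|LOGSECOND'
```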