Known issues for Db2 and Db2 Warehouse

The following known issues apply to the Db2 and Db2 Warehouse services.

Db2 and Db2 Warehouse images return an error when mirroring

Applies to: 4.7.0 and 4.7.1

Fixed in: 4.7.2

Problem
Mirroring Db2 and Db2 Warehouse images might return an error.
Symptoms
The following error message appears:
unknown: OCI manifest found, but accept header does not support OCI manifests
Workaround
Upgrade to 4.7.2. OCI images have been replaced with Docker V2.2 images.

Databases page does not load and returns a 404 error after upgrade from 4.5.x to 4.7.x

Applies to: 4.7.0, 4.7.1, and 4.7.2

Fixed in: 4.7.3

Problem
If the custom resource for db2oltpService, db2whService, or db2aaserviceService is not set to a specific version, the databases-zen-extensions custom resource is created before Cloud Pak for Data is upgraded. When this occurs, the Databases page returns a 404 error.
Symptoms
  • The Databases page fails to open and returns a 404 error.
  • The following error message appears in the ibm-nginx pod:
    "/usr/local/openresty/nginx/html/zen-databases/index.html" is not found (2: No such file or directory)
Workaround
  1. Run the following command to patch your custom resource:
    oc patch zenextensions databases-zen-extensions \
    --namespace=${PROJECT_CPD_INST_OPERANDS} \
    --type=merge \
    --patch '{"spec": {"test": "test"}}'
  2. Run the following command to return the status of zenExtensions:
    oc get zenextensions databases-zen-extensions \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
    Wait for the status to be Completed.

db2u-ssl-rotate job pods not starting

Applies to: 4.7.0

Fixed in: 4.7.1

Problem
Your db2u-ssl-rotate job pods fail to start when upgrading to Cloud Pak for Data version 4.7.0 due to the following error:
Warning  FailedCreate  4m36s (x209 over 13h)  job-controller  (combined from similar events): Error creating: pods "db2u-ssl-rotate-db2oltp-1687773258209790-2jv6f" is forbidden: failed quota: cpd-quota: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
Cause
The db2u-ssl-rotate job pods cannot start if:
  • A resourcequota is applied to the namespace, limiting CPU and memory resources.
  • A limitrange to set default values does not exist.
An error occurs because the job pods do not have limits.cpu, limits.memory, requests.cpu, or requests.memory parameters set.
Workaround
Apply a limitrange with default values for limits and requests. In the following YAML, set the namespace to the project where Cloud Pak for Data is installed:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-limits
  namespace: ${PROJECT_CPD_INST_OPERANDS}
spec:
  limits:
  - default:
      cpu: 300m
      memory: 200Mi
    defaultRequest:
      cpu: 200m
      memory: 200Mi
    type: Container

Unable to connect Q Replication pod to a database after backup and restore

Users with a Q Replication deployment might be unable to connect a Q Replication pod to a database after performing a backup and restore.

Workaround
Download a script to deploy a fix. Follow the steps in Troubleshooting a Q Replication pod not connecting to a database after backup and restore to complete the workaround.

Db2 load utility failure after backup and restore procedure

Users might be unable to run the Db2 load utility after performing a backup and restore procedure, due to changes in user permissions. The following workaround corrects the ownership and permission issues.

Workaround
  1. Run the exec command to access the head node of your instance:
    oc exec -it c-db2wh-1639609960917075-db2u-0 -- bash
  2. Run the following command to remove the load/admin directory:
    rm -r /mnt/blumeta0/db2/load/admin
  3. Run the following command to assign the correct owner of the Db2 load utility:
    sudo chown db2uadm:db2iadm1 /mnt/blumeta0/db2/load
  4. Run the following command to provide the Db2 load utility permission to write to that directory:
    sudo chmod g+s /mnt/blumeta0/db2/load
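The chmod g+s in step 4 sets the setgid bit on the directory, so files created in it inherit the directory's group. A minimal local illustration of this mechanism, using a scratch directory rather than the real /mnt/blumeta0/db2/load path:

```shell
# Illustration only: set the setgid bit on a temporary directory and
# inspect the mode string (the real target is /mnt/blumeta0/db2/load).
demo_dir=$(mktemp -d)
chmod 2775 "$demo_dir"        # 2 = setgid bit, equivalent to chmod g+s
mode=$(ls -ld "$demo_dir" | awk '{print $1}')
echo "$mode"                  # the group triad shows 's' when setgid is set
```

With the setgid bit in place, new files under the directory belong to db2iadm1 regardless of which user writes them, which is what allows the load utility to keep working.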

Connect statement on Db2 Warehouse MPP hangs after manage_snapshots --action suspend

After running the manage_snapshots --action suspend command to suspend Db2 write operations, the db2_all db2 connect to dbname or manage_snapshots --action resume commands might hang.

The manage_snapshots --action suspend and manage_snapshots --action resume commands can be executed explicitly while performing a snapshot backup with Db2 Warehouse container commands or as part of Backing up and restoring an entire deployment.

The db2_all db2 connect to dbname command is executed in the manage_snapshots --action resume script.

Symptoms

The manage_snapshots --action resume command hangs at connect to BLUDB:

oc exec -it c-db2wh-crd-mpp-2x2-separate-db2u-0 -- manage_snapshots --action resume
Defaulted container "db2u" out of: db2u, init-labels (init), init-kernel (init)

connect to BLUDB
Workaround
  1. Locate the catalog node pod:
    oc get po -l name=dashmpp-head-0
  2. Run the exec command as the db2inst1 user to access an interactive shell inside the container:
    oc exec -it c-db2wh-1639609960917075-db2u-0 -- su - db2inst1
  3. Issue a db2 connect command to the database.
    db2 connect to BLUDB
  4. If the command hangs, repeat steps 1-3 in another terminal. When the connect command is successful, issue the manage_snapshots --action resume command:
    manage_snapshots --action resume
  5. When this command completes, the other hanging connections should be resolved.

Issues when creating a Db2 connection with Cloud Pak for Data credentials

When you create a Db2 connection in the web console, an error can occur if you check the Cloud Pak for Data authentication box. To work around this issue, enter your Cloud Pak for Data credentials in the User name and Password fields, and do not check the Cloud Pak for Data authentication box.

Db2 post restore hook fails during restore operation 1

Symptoms
The backup log indicates the following message:
...
time=2022-06-06T11:00:28.035568Z level=info msg=   status: partially_succeeded
time=2022-06-06T11:00:28.035572Z level=info msg=   nOpResults: 70
time=2022-06-06T11:00:28.035585Z level=info msg=   postRestoreViaConfigHookRule on restoreconfig/analyticsengine-br in namespace wkc (status=succeeded)
time=2022-06-06T11:00:28.035589Z level=info msg=   postRestoreViaConfigHookRule on restoreconfig/lite in namespace wkc (status=succeeded)
time=2022-06-06T11:00:28.035593Z level=info msg=   postRestoreViaConfigHookRule on restoreconfig/db2u in namespace wkc (status=error)
...
time=2022-06-06T11:00:28.035601Z level=info msg=   postRestoreViaConfigHookJob on restoreconfig/wkc in namespace wkc (status=timedout)
...
Either db2u pod c-db2oltp-iis-db2u or c-db2oltp-wkc-db2u does not progress beyond:
....
+ db2licm_cmd=/mnt/blumeta0/home/db2inst1/sqllib/adm/db2licm
+ /mnt/blumeta0/home/db2inst1/sqllib/adm/db2licm -a /db2u/license/db2u-lic
Resolution

Delete the affected db2u pods so that they are re-created. For example:

oc delete pod <db2u-pod-name>

Then check that the pods are up and running.

oc get pod | grep -E "c-db2oltp-iis-db2u|c-db2oltp-wkc-db2u"

Run the post restore hook again.

cpd-cli oadp restore posthooks --include-namespaces wkc --log-level=debug --verbose

Db2 post restore hook fails during restore operation 2

Symptoms
The restore log indicates the following message:
...
* ERROR: Database could not be activated
Failed to restart write resume and/or active database
...
Resolution

Delete the affected db2u pods so that they are re-created. For example:

oc delete pod <db2u-pod-name>

Then check that the pods are up and running.

oc get pod | grep -E "c-db2oltp-iis-db2u|c-db2oltp-wkc-db2u"

Run the post restore hook again.

cpd-cli oadp restore posthooks --include-namespaces wkc --log-level=debug --verbose

The qrep-containers do not work for non-zen namespaces

Applies to: 4.7.0 and later

Problem
The QREP containers fail in namespaces other than zen.
Cause
The server.env variables DR_ENV and DR_ENV_DB2U are incorrectly being set to standalone because the QREP containers cannot detect Cloud Pak for Data deployments that do not use the default zen namespace.
Solution
For each QREP container, enter the following commands to manually update the server.env variables DR_ENV, DR_ENV_DB2U, and LOCAL_DR_ENV_TYPE to the correct environment type:
oc exec -it <qrep-container> -- bash
DR_ENV=DB2U-<database_type>
sed -i --follow-symlinks '/DR_ENV/d' ${BLUDR_WLP_INSTALL_PATH}/wlp/usr/servers/bludr/server.env
echo "DR_ENV=$DR_ENV" >> ${BLUDR_WLP_INSTALL_PATH}/wlp/usr/servers/bludr/server.env
sed -i --follow-symlinks '/DR_ENV_DB2U/d' ${BLUDR_WLP_INSTALL_PATH}/wlp/usr/servers/bludr/server.env
echo "DR_ENV_DB2U=$DR_ENV" >> ${BLUDR_WLP_INSTALL_PATH}/wlp/usr/servers/bludr/server.env
sed -i --follow-symlinks '/LOCAL_DR_ENV_TYPE/d' ${BLUDR_WLP_INSTALL_PATH}/wlp/usr/servers/bludr/server.env
echo "LOCAL_DR_ENV_TYPE=$DR_ENV" >> ${BLUDR_WLP_INSTALL_PATH}/wlp/usr/servers/bludr/server.env
db2 connect to bludb user qrepdbadm using $(cat /secrets/qrepdbadmpwd/password)
db2 "update ASN.IBMQREP_RESTAPI_PROPERTIES SET PROP_VALUE_CHAR = 'DB2U-<database_type>' WHERE PROP_KEY='LOCAL_DR_ENV_TYPE'"
/opt/ibm/bludr/scripts/bin/bludr-restart.sh

where DB2U-<database_type> is your database type, for example, DB2U-DB2OLTP or DB2U-DB2WH.
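The sed and echo pairs above follow a delete-then-append idiom: remove any stale entry for a key, then append the corrected value. A sketch of that idiom on a scratch file, using example values rather than the real container paths:

```shell
# Sketch of the server.env update idiom on a scratch file
# (example values; the real file lives under ${BLUDR_WLP_INSTALL_PATH}).
env_file=$(mktemp)
printf 'DR_ENV=standalone\nDR_ENV_DB2U=standalone\n' > "$env_file"

DR_ENV=DB2U-DB2OLTP                 # example database type
sed -i '/DR_ENV/d' "$env_file"      # drop any stale DR_ENV* lines
echo "DR_ENV=$DR_ENV" >> "$env_file"
echo "DR_ENV_DB2U=$DR_ENV" >> "$env_file"
cat "$env_file"
```

Because the first sed pattern also matches DR_ENV_DB2U, both stale entries are removed in one pass before the corrected values are appended, so the file never ends up with duplicate keys.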