Important: IBM Cloud Pak for Data
Version 4.7 will reach end of support (EOS) on 31 July 2025. For more information, see the
Discontinuance of service announcement for
IBM Cloud Pak for Data Version 4.X.
Upgrade to IBM Software Hub Version
5.1 before IBM Cloud Pak for Data Version 4.7 reaches end of
support. For more information, see Upgrading IBM Software Hub in the IBM Software Hub Version 5.1
documentation.
Disabling replication removes the replication pod, container, and services from the
OpenShift® cluster.
About this task
If you want to remove a replication deployment, you should also examine all related replication
deployments. For example, if you want to remove replication for a source database, all replication
target deployments should also be removed, unless a target deployment is a source of another
replication deployment.
Procedure
-
Locate the Db2
Warehouse custom resource for the replication deployment that you want to
disable, and put the custom resource into edit mode:
oc edit db2ucluster deployment-ID
-
In the
addOns.qrep section of the CR, change qrep.enabled:
true to qrep.enabled: false.
-
In the
storage section of the CR, remove the replication storage
specification.
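Taken together, the two edits above leave the CR looking roughly like the fragment below. This is a sketch, not a complete Db2uCluster spec; surrounding fields are omitted and the exact layout depends on your deployment:

```yaml
spec:
  addOns:
    qrep:
      enabled: false        # changed from true to disable replication
  storage:
    # Delete the replication (qrepdata) storage entry from this list;
    # leave all other storage entries unchanged.
```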
-
Save and close the CR.
The operator disables the replication component.
-
Run the following commands to check the status of the replication deployment:
oc get deployment | grep qrep
oc get pod | grep qrep
Use the Db2
Warehouse instance ID to find a matching replication deployment and pod name.
Verify that the Db2
Warehouse cluster is in the Terminating state. The
replication pod is then removed.
-
Exec into the Db2
Warehouse pod with the catalog partition and run the
apply-db2cfg-setting
script to restore the Db2
Warehouse registry variables and configuration parameters to their
pre-replication settings:
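The exact command for this step is deployment specific; the sketch below is an assumption, not the documented command. The pod name is a placeholder, and the location of the apply-db2cfg-setting script inside the pod varies by release, so confirm both for your installation:

```shell
# Placeholder pod name; list candidates with: oc get pods | grep db2u
oc exec -it c-db2wh-<instance-ID>-db2u-0 -- bash

# Inside the pod, locate the script, for example:
#   find / -name apply-db2cfg-setting 2>/dev/null
# then run it to restore the pre-replication registry variables
# and configuration parameters.
```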
-
Clean up data under the shared file mount /mnt/qrepdata/*. Exec into the
Db2
Warehouse pod and run the following commands:
oc exec -it db2u_pod -- bash
cd /mnt/qrepdata
sudo rm -rf *
-
Remove the replication global metadata table ASN.IBMQREP_RESTAPI_PROPERTIES by running these
commands:
db2 connect to bludb;
db2 "drop table asn.ibmqrep_restapi_properties";
db2 terminate;
-
If you choose to delete the Db2
Warehouse cluster, you must delete the persistent volume
for qrepdata.
For example:
oc get pv | grep qrep
pvc-815f85ff-e0fb-41bf-a881-54104f67f4c0 100Gi RWX Retain Bound zen/c-db2wh-1636403736070957-qrepdata managed-nfs-storage 2d2h
oc delete pv pvc-815f85ff-e0fb-41bf-a881-54104f67f4c0
-
If you used the qrep-expose-nodeports.sh script to expose the node ports
for replication, manually delete the node port entries from the
/etc/haproxy/haproxy.cfg file.
Each replication container would have entries for the node ports 50001, 1414, 1415, and 9444:
9444: db2wh-instance_id-qrep-rest-svc
50001: db2wh-instance_id-db2u-engn-svc
1414: db2wh-instance_id-qrep-mq-svc-sendq
1415: db2wh-instance_id-qrep-mq-svc-recvq
For example, for the instance ID 1635801061637006, you would see the following entry in the
haproxy.cfg file for the replication REST service port 31723 (9444):
frontend db2wh-1635801061637006-qrep-rest-svc
bind *:31723
default_backend db2wh-1635801061637006-qrep-rest-svc
mode tcp
option tcplog
backend db2wh-1635801061637006-qrep-rest-svc
balance source
mode tcp
server master0 10.17.110.242:31723 check
Similar entries would be in the file for other ports for the same instance ID. You would delete
these entries, save the file, and run the following commands:
ps aux | grep haproxy
kill -9 haproxy_pid
systemctl restart haproxy
systemctl reload haproxy
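Editing haproxy.cfg by hand is error prone. As an optional sketch, the frontend and backend stanzas for one instance ID can be filtered out with a small script; the function name is illustrative, and you should review the output before replacing the real file:

```shell
#!/bin/sh
# Print a haproxy.cfg with every frontend/backend stanza that names the
# given replication instance ID removed.
# Usage: strip_qrep_stanzas <instance-ID> <path-to-haproxy.cfg>
strip_qrep_stanzas() {
  id="$1"
  cfg="$2"
  awk -v id="$id" '
    /^[^ \t]/ { skip = 0 }                          # any new top-level section resets the flag
    /^(frontend|backend) / && $0 ~ id { skip = 1 }  # skip stanzas naming the instance ID
    !skip                                           # print everything else unchanged
  ' "$cfg"
}
```

Run it against a copy of /etc/haproxy/haproxy.cfg, compare the result with diff, and only after verifying it replace the file and restart HAProxy as shown above.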