Disabling Q Replication removes the replication pod, container, and services from the Db2 Warehouse instance.
About this task
If you want to remove a Q Replication deployment, you should also examine all related Q
Replication deployments. For example, if you want to remove Q Replication for a source database, all
Q Replication target deployments should also be removed, unless a target deployment is a source for
another Q Replication deployment.
Procedure
-
Edit your Db2uCluster (for IBM Software Hub versions prior to 5.2.0) or Db2uInstance (for version 5.2.0 and later) custom resource (CR) to disable the Q Replication deployment:
oc edit db2ucluster deployment-ID
oc edit db2uinstance deployment-ID
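If you do not know the deployment ID, you can list the custom resources to find it (a minimal sketch, assuming that you are logged in to the project that contains the instance):
oc get db2ucluster
oc get db2uinstance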
-
In the addOns.qrep section of the CR, change qrep.enabled: true to qrep.enabled: false.
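After the change, the relevant CR fragment would look similar to the following sketch (the exact field layout can vary by operator version):
addOns:
  qrep:
    enabled: false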
-
In the storage section of the CR, remove the replication storage specification.
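The entry to remove would look similar to the following sketch; the entry name qrepdata and the type and spec fields are assumptions that are based on the example persistent volume shown later in this task:
storage:
- name: qrepdata       # remove this entire entry
  type: create
  spec:
    storageClassName: managed-nfs-storage
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 100Gi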
-
Save and close the CR.
The operator disables the replication component.
-
Run the following commands to check the status of the replication deployment:
oc get deployment | grep qrep
oc get pod | grep qrep
Use the Db2 Warehouse instance ID to find the matching replication deployment and pod names. Verify that the Q Replication pod is in Terminating state; the pod is then removed.
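For example, assuming the instance ID 1636403736070957 that appears later in this task, you can narrow the output to that instance:
oc get deployment | grep qrep | grep 1636403736070957
oc get pod | grep qrep | grep 1636403736070957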
-
Exec into the Db2 Warehouse pod that hosts the catalog partition and run the apply-db2cfg-setting script to restore the Db2 Warehouse registry variables and configuration parameters to their pre-replication settings.
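A minimal sketch of this step, assuming that the catalog partition runs in the first Db2U pod and that the script is on the PATH of the instance owner (the pod name, user name, and invocation are assumptions; adjust them for your deployment):
oc exec -it c-db2wh-1636403736070957-db2u-0 -- bash
su - db2inst1
apply-db2cfg-setting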
-
Clean up the data under the shared file mount /mnt/qrepdata. Exec into the Db2 Warehouse pod and run the following commands:
oc exec -it db2u_pod -- bash
cd /mnt/qrepdata
sudo rm -rf *
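Equivalently, the cleanup can be run as a single command from outside the pod (db2u_pod is a placeholder for your pod name):
oc exec db2u_pod -- sudo bash -c 'rm -rf /mnt/qrepdata/*'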
-
Remove the replication global metadata objects, including the ASN.IBMQREP_RESTAPI_PROPERTIES table, by running these commands:
db2 connect to bludb;
db2 "drop table asn.ibmqrep_restapi_properties";
db2 "drop table asn.ibmqrep_mcgparms";
db2 "drop table asn.ibmqrep_mcgsync";
db2 "drop function qasn.ibmqrep_rest_ver";
db2 terminate;
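These statements are run from a Db2 command line inside the Db2 Warehouse pod; for example, exec into the pod and switch to the instance owner first (the user name db2inst1 is an assumption; use your instance owner):
oc exec -it db2u_pod -- su - db2inst1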
-
If you choose to delete the instance, you must delete the persistent volume for qrepdata. For example:
oc get pv | grep qrep
pvc-815f85ff-e0fb-41bf-a881-54104f67f4c0 100Gi RWX Retain Bound zen/c-db2wh-1636403736070957-qrepdata managed-nfs-storage 2d2h
oc delete pv pvc-815f85ff-e0fb-41bf-a881-54104f67f4c0
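Depending on your storage configuration, the bound persistent volume claim that is shown in the example output might also need to be deleted (the claim name and namespace are taken from that output):
oc delete pvc c-db2wh-1636403736070957-qrepdata -n zen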
-
If you used the qrep-expose-nodeports.sh script to expose the node ports
for replication, manually delete the node port entries from the
/etc/haproxy/haproxy.cfg file.
Each replication deployment has entries for the following node ports:
9444: db2wh-instance_id-qrep-rest-svc
50001: db2wh-instance_id-db2u-engn-svc
1414: db2wh-instance_id-qrep-mq-svc-sendq
1415: db2wh-instance_id-qrep-mq-svc-recvq
For example, for the instance ID 1635801061637006, you would see the following entry in the haproxy.cfg file for the replication REST service port 9444 (node port 31723):
frontend db2wh-1635801061637006-qrep-rest-svc
    bind *:31723
    default_backend db2wh-1635801061637006-qrep-rest-svc
    mode tcp
    option tcplog
backend db2wh-1635801061637006-qrep-rest-svc
    balance source
    mode tcp
    server master0 10.17.110.242:31723 check
Similar entries exist in the file for the other ports for the same instance ID. Delete these entries, save the file, and run the following commands:
ps aux | grep haproxy
kill -9 haproxy_pid
systemctl restart haproxy
systemctl reload haproxy
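You can also validate the edited configuration with the standard HAProxy syntax check before you restart the service:
haproxy -c -f /etc/haproxy/haproxy.cfg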