Verifying backup orchestration data
To verify that the backup of the orchestration data is valid, you must first restore the PostgreSQL cluster. Then, check the status of the restored cluster and confirm that no errors are reported.
Procedure
- Identify the backup object that you want to restore. Run the following command to list the backups in your namespace:
oc get backups -n <cp4na_namespace>
<cp4na_namespace> is the namespace where IBM® Cloud Pak for Network Automation is installed.
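If many backups exist, you can sort them by creation time to find the most recent one. The following command is a suggested variation that uses the standard --sort-by option of oc get:
oc get backups -n <cp4na_namespace> --sort-by=.metadata.creationTimestamp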
- Copy the following YAML data into a YAML file, such as cp4na-postgres-verify.yaml:
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cp4na-o-postgresql-verify
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
  description: PostgreSQL cluster instance for orchestration instance
  imageName: <image_name>
  instances: 1
  postgresql:
    parameters:
      max_connections: <max_connections_value>
      shared_buffers: <shared_buffers_value>
  primaryUpdateStrategy: unsupervised
  resources:
    limits:
      cpu: <limits_cpu_value>
      memory: <limits_memory_value>
    requests:
      cpu: <requests_cpu_value>
      memory: <requests_memory_value>
  storage:
    size: <size_value>
  bootstrap:
    recovery:
      backup:
        name: <backup_name>
      recoveryTarget:
        targetTime: <backup_completion_time>  # Format: yyyy-mm-dd hh:mm:ss
- Update the keys in the YAML file with the values that your IBM Cloud Pak for Network Automation installation uses. To find out what those values are, use the following commands. A combined script that collects all of the values is shown after this list.
imageName
oc get orchestration <instance_name> -o jsonpath='{.spec.advanced.postgres.image}' -n <cp4na_namespace>
postgresql.parameters.max_connections
oc get orchestration <instance_name> -o jsonpath='{.spec.advanced.postgres.parameters.maxConnections}' -n <cp4na_namespace>
postgresql.parameters.shared_buffers
oc get orchestration <instance_name> -o jsonpath='{.spec.advanced.postgres.parameters.sharedBuffers}' -n <cp4na_namespace>
resources.limits.cpu
oc get orchestration <instance_name> -o jsonpath='{.spec.advanced.postgres.resources.limits.cpu}' -n <cp4na_namespace>
resources.limits.memory
oc get orchestration <instance_name> -o jsonpath='{.spec.advanced.postgres.resources.limits.memory}' -n <cp4na_namespace>
resources.requests.cpu
oc get orchestration <instance_name> -o jsonpath='{.spec.advanced.postgres.resources.requests.cpu}' -n <cp4na_namespace>
resources.requests.memory
oc get orchestration <instance_name> -o jsonpath='{.spec.advanced.postgres.resources.requests.memory}' -n <cp4na_namespace>
storage.size
oc get orchestration <instance_name> -o jsonpath='{.spec.storage.postgres.storageSize}' -n <cp4na_namespace>
storage.storageClass
oc get orchestration <instance_name> -o jsonpath='{.spec.storage.postgres.storageClassName}' -n <cp4na_namespace>
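The following shell sketch is one way to collect all of these values in a single pass. It is illustrative only and assumes the same jsonpath fields as the commands in the previous list. Replace <instance_name> and <cp4na_namespace> with your own values before you run it.
INSTANCE=<instance_name>
NS=<cp4na_namespace>
for path in \
  '{.spec.advanced.postgres.image}' \
  '{.spec.advanced.postgres.parameters.maxConnections}' \
  '{.spec.advanced.postgres.parameters.sharedBuffers}' \
  '{.spec.advanced.postgres.resources.limits.cpu}' \
  '{.spec.advanced.postgres.resources.limits.memory}' \
  '{.spec.advanced.postgres.resources.requests.cpu}' \
  '{.spec.advanced.postgres.resources.requests.memory}' \
  '{.spec.storage.postgres.storageSize}' \
  '{.spec.storage.postgres.storageClassName}'; do
  # Print each jsonpath next to the value that the orchestration CR returns
  echo "$path: $(oc get orchestration "$INSTANCE" -o jsonpath="$path" -n "$NS")"
done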
The following example CR shows the configuration settings and their default values:
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cp4na-o-postgresql-verify
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
  description: Postgres cluster instance for orchestration instance default
  imageName: cp.icr.io/cp/cpd/postgresql:13.4@sha256:ebcfb1ead809f7cc327290e97eec572958891a928960edbab947338e6d0f7827
  instances: 1
  postgresql:
    parameters:
      max_connections: "300"
      shared_buffers: 128MB
  primaryUpdateStrategy: unsupervised
  resources:
    limits:
      cpu: "4"
      memory: 6Gi
    requests:
      cpu: "0.142"
      memory: 1Gi
  storage:
    size: 250Gi
  bootstrap:
    recovery:
      backup:
        name: cp4na-fullbackup-immediate-20220107102105
      #recoveryTarget:
      #  targetTime: ""
- Update the following backup keys in the YAML file. A filled-in example of the recovery section is shown after this list.
bootstrap.recovery.backup.name
Add the name of the PostgreSQL backup object that you want to restore. Use the PostgreSQL backup object, not the orchestration backup object. Run the following command to find the name of the PostgreSQL backup object:
oc get backup -n <cp4na_namespace>
bootstrap.recovery.recoveryTarget.targetTime
To recover the latest available WAL recovery point, remove this key. To recover a backup from a specific time, enter the time in this format:
yyyy-mm-dd hh:mm:ss
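For example, an updated recovery section might look like the following sketch. The backup name is taken from the sample CR earlier in this topic, and the timestamp is a placeholder; use the values from your own environment.
  bootstrap:
    recovery:
      backup:
        name: cp4na-fullbackup-immediate-20220107102105   # a PostgreSQL backup object, as returned by 'oc get backup'
      recoveryTarget:
        targetTime: "2022-01-07 11:30:00"   # placeholder time in yyyy-mm-dd hh:mm:ss format; remove recoveryTarget to use the latest WAL recovery point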
- Restore the cluster by applying the YAML content that you created in steps 2 to 4. To apply the YAML file, run a command similar to the following example:
oc apply -f cp4na-postgres-verify.yaml -n <cp4na_namespace>
- Wait until the PostgreSQL cluster is restored. You can check the progress of the restore by using the following command:
oc get cluster cp4na-o-postgresql-verify -n <cp4na_namespace>
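To follow the progress continuously, you can add the standard -w (watch) flag. You can also query the status phase directly; the .status.phase field is based on the EDB Cluster CRD and might vary between operator versions.
oc get cluster cp4na-o-postgresql-verify -n <cp4na_namespace> -w
oc get cluster cp4na-o-postgresql-verify -n <cp4na_namespace> -o jsonpath='{.status.phase}'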
- When the PostgreSQL cluster is restored successfully, a pod instance is running that corresponds to the cluster instance name, and the status of the cluster is Cluster in healthy state. The following sample output shows a restored cluster:
NAME                        AGE     INSTANCES   READY   STATUS                     PRIMARY
cp4na-o-postgresql-verify   3h53m   1           1       Cluster in healthy state   cp4na-o-postgresql-verify-1
If the restore process fails, you can use the following commands to find more details:
oc get pod -n <cp4na_namespace> | grep cp4na-o-postgresql-verify
oc logs <pod_name> -n <cp4na_namespace>
- Run the following command to verify the restored cluster:
oc describe cluster cp4na-o-postgresql-verify -n <cp4na_namespace>
If no errors are displayed, the cluster is successfully restored. If errors are displayed, you can use a command similar to the following example to review the log of each pod:
oc logs cp4na-o-postgresql-verify-1 -n <cp4na_namespace>
To verify that PVCs are created and check the status of each PVC, run the following command:
oc get pvc cp4na-o-postgresql-verify-1 -n <cp4na_namespace>
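If you want to check only the binding status of the PVC, you can query the standard Kubernetes .status.phase field, which reports Bound for a healthy claim:
oc get pvc cp4na-o-postgresql-verify-1 -n <cp4na_namespace> -o jsonpath='{.status.phase}'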
What to do next
Because you created the restored cluster for testing purposes only, you can delete it. For example, run the following command:
oc delete cluster cp4na-o-postgresql-verify -n <cp4na_namespace>
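Depending on how the operator reclaims storage, you might also want to confirm that no persistent volume claims from the test cluster remain. The following check is a suggestion; the grep pattern assumes the cluster name that is used in this procedure.
oc get pvc -n <cp4na_namespace> | grep cp4na-o-postgresql-verify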