Backing up your environments
Back up your data regularly so that you can resume work as quickly and effectively as possible after a failure.
Before you begin
For all mentions of icp4adeploy on this page, replace it with the value that you set for metadata.name in your IBM Cloud Pak® for Business Automation custom resource (CR) file.
- Optional: If you are using IBM Business Automation Studio, export your data. You cannot export your data after your environment is stopped.
- You can scale down all your environment pods to 0 by running the following commands:
oc scale deploy ibm-cp4a-operator --replicas=0
oc scale deploy ibm-pfs-operator --replicas=0
oc scale deploy ibm-content-operator --replicas=0
for i in $(oc get deploy -o name | grep icp4adeploy); do oc scale $i --replicas=0; done
for i in $(oc get sts -o name | grep icp4adeploy); do oc scale $i --replicas=0; done
oc scale sts zen-metastoredb --replicas=0
About this task
IBM Cloud Pak for Business Automation uses cert-manager to set up the TLS key and certificate secrets. Use the following steps to back up IBM Cloud Pak for Business Automation in a multiple-zone environment.
Procedure
- Make copies of the Cloud Pak custom resource (CR) files that are used in the primary and secondary environments. The custom resource (CR) file for a secondary environment has a different hostname from the primary environment.
- Back up the security definitions in the following table. For more information, see Creating secrets to protect sensitive configuration data.
Table 1. Secrets to back up
- IBM Cloud Pak for Business Automation secrets: icp4adeploy-cpe-oidc-secret, admin-user-details
- Image pull secret (not present in an air-gapped environment): ibm-entitlement-key
- Lightweight Directory Access Protocol (LDAP) secret: ldap-bind-secret
- LDAP SSL certificate secret (required if you enabled SSL connection for LDAP; you must also back up the certificate file): ldap-ssl-cert
- Database SSL certificate secret (required if you enabled SSL connection for the database; you must also back up the certificate file; for examples of secret names, see Preparing the databases): for example, ibm-dba-db2-cacert if you are using Db2
- Shared encryption key secret: ibm-iaws-shared-key-secret
- IBM Business Automation Workflow secret: ibm-baw-wfs-server-db-secret
- Process Federation Server admin secret: ibm-pfs-admin-secret
- IBM Business Automation Application Engine secrets: icp4adeploy-workspace-aae-app-engine-admin-secret
- Resource Registry secret: icp4adeploy-rr-admin-secret
- IBM Business Automation Navigator secret: ibm-ban-secret
- IBM FileNet® Content Manager secret: ibm-fncm-secret
- IBM Business Automation Studio secret: icp4adeploy-bas-admin-secret
- Playback Application Engine secret: playback-server-admin-secret
- IBM Workflow Process Service Runtime admin secret: <cr_name>-wfps-admin-secret
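The secrets in Table 1 can be exported in one pass. The sketch below is an assumption-laden convenience, not part of the product: the `SECRETS` list holds only a few of the example names from the table (replace them with the names from your deployment), and the CLI is parameterized through `OC` purely so the loop can be dry-run.

```shell
#!/bin/sh
# Dump each secret in the list to its own YAML file.
# OC defaults to "oc"; override it only for testing.
OC="${OC:-oc}"
BACKUP_DIR="${BACKUP_DIR:-./secret-backup}"
# Example names from Table 1 -- substitute your deployment's secret names.
SECRETS="icp4adeploy-cpe-oidc-secret admin-user-details ibm-entitlement-key \
ldap-bind-secret ibm-iaws-shared-key-secret ibm-fncm-secret ibm-ban-secret"

backup_secrets() {
  mkdir -p "$BACKUP_DIR"
  for s in $SECRETS; do
    # One YAML file per secret, named after the secret.
    "$OC" get secret "$s" -o yaml > "$BACKUP_DIR/$s.yaml" || echo "WARN: could not read $s"
  done
}

# backup_secrets   # run against a live cluster
```

Keeping one file per secret makes a later selective restore (`oc apply -f <name>.yaml`) straightforward.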
- Make copies of the security definitions that are used to protect the configuration data in the primary and secondary environments. Get the WLP_CLIENT_ID and WLP_CLIENT_SECRET values from the ibm-common-services/platform-oidc-credentials secret on the backup cluster and keep them for later use.
- Back up your PVC definitions and PV definitions depending on your type of provisioning:
- If you are using static provisioning, back up your PVC definitions, PV definitions, and the content in the PV.
- If you are using dynamic provisioning, the PV and PVC definitions are created automatically by the operator, so you need to back up only the PVC definitions and the content in the PV. To back up the PVC definitions, get each definition and modify the format so that the PVC can be deployed again. You can use a script similar to the following:
#!/bin/sh
NS=ibm-cp4ba
pvcbackup() {
  oc get pvc -n $NS --no-headers=true | while read each
  do
    pvc=`echo $each | awk '{ print $1 }'`
    echo "---" >> pvc.yaml
    kubectl get pvc $pvc -o yaml \
      | yq eval 'del(.status, .metadata.finalizers, .metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields, .metadata.ownerReferences, .spec.volumeMode, .spec.volumeName)' - >> pvc.yaml
  done
}
pvcbackup
When you restore the environment, you can create the PVC by using the backup definition and by copying the content to the corresponding PV.
Some PVC definitions must not be backed up; if you back up those definitions, you might encounter an error. The following table shows which PVC definitions need to be backed up or replicated.
Table 2. PVC definitions to back up
- IBM Business Automation Navigator:
  - icn-asperastore: Business Automation Navigator storage for Aspera. Back up: No
  - icn-cfgstore: Business Automation Navigator Liberty configuration. Back up: Yes
  - icn-logstore: Liberty and Business Automation Navigator logs. Multiple IBM Content Navigator pods write logs here. Back up: No
  - icn-pluginstore: Business Automation Navigator custom plug-ins. Back up: No
  - icn-vw-cachestore: Business Automation Navigator storage for the Daeja ViewONE cache. Back up: No
  - icn-vw-logstore: Business Automation Navigator viewer logs for Daeja ViewONE. Back up: No
- Cloud Pak Platform UI:
  - datadir-zen-metastoredb-0, datadir-zen-metastoredb-1, datadir-zen-metastoredb-2: Database files for the Cloud Pak Platform UI metastoredb. Back up: Yes
- Additional PVCs:
  - data-iaf-system-elasticsearch-es-data-0
  - iaf-system-elasticsearch-es-snap-main-pvc
  - ibm-bts-cnpg-bawent-cp4ba-bts-1
  - user-home-pvc
- Back up all the content in the PVs. You can choose which files to restore on your environment later. The generated folder names for dynamically provisioned PVs are not static. For example, the folder name might look similar to bawent-cmis-cfgstore-pvc-ctnrs-pvc-e5241e0c-3811-4c0d-8d0f-cb66dd67f672. The folder name is different for each deployment, so you must use a mapping folder to back up the content. The following script can be used to create backups of your PVs:
#!/bin/sh
NS=bawent
SOURCE_DIR=/home/pv/2103
BACKUP_DIR=/home/backup
pvbackup() {
  oc get pvc -n $NS --no-headers=true | while read each
  do
    pvc=`echo $each | awk '{ print $1 }'`
    pv=`echo $each | awk '{ print $3 }'`
    if [ -d "$SOURCE_DIR/$NS-$pvc-$pv" ]
    then
      echo "copying pv $pv"
      mkdir -p $BACKUP_DIR/$pvc
      cp -r -a $SOURCE_DIR/$NS-$pvc-$pv/. $BACKUP_DIR/$pvc
      echo ""
    else
      echo "NOT FOUND for $pvc"
    fi
  done
}
pvbackup
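The mapping idea in the pvbackup script can be walked through without a cluster. The sketch below simulates one dynamically provisioned folder in a throwaway directory tree; the PVC and PV names are hypothetical stand-ins, and the copy step is the same logic the script above applies per PVC/PV pair.

```shell
#!/bin/sh
# Self-contained demo: the "<namespace>-<pvc>-<pv>" folder name is
# reconstructed so the backup lands under the stable PVC name rather
# than the randomly generated PV id.
NS=bawent
SOURCE_DIR=./pv-demo/source
BACKUP_DIR=./pv-demo/backup

# Simulate one dynamically provisioned PV folder (hypothetical ids):
PVC=icn-cfgstore
PV=pvc-e5241e0c-3811-4c0d-8d0f-cb66dd67f672
mkdir -p "$SOURCE_DIR/$NS-$PVC-$PV"
echo "server.xml" > "$SOURCE_DIR/$NS-$PVC-$PV/config.txt"

# Same copy logic as the pvbackup script, for one PVC/PV pair:
if [ -d "$SOURCE_DIR/$NS-$PVC-$PV" ]; then
  mkdir -p "$BACKUP_DIR/$PVC"
  cp -r -a "$SOURCE_DIR/$NS-$PVC-$PV/." "$BACKUP_DIR/$PVC"
fi
```

After the copy, the content is reachable under ./pv-demo/backup/icn-cfgstore regardless of which random PV id the deployment generated.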
- If you are using IBM Workflow Process Service, make sure to back up the PVCs prefixed with datasave:
  - Use the Kubernetes command to get each PVC definition and back up the necessary parts of the definition.
  - Back up the files under the folder <the_folder_for_datasave_PV>/messaging and keep the user:group information.
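One way to keep the user:group information mentioned above is to record it into a manifest alongside the copy, so it can be reapplied with chown on restore. The sketch below uses stand-in paths for <the_folder_for_datasave_PV>/messaging and assumes GNU stat (Linux); cp -a preserves mode and timestamps, and also ownership when run as root.

```shell
#!/bin/sh
# Record ownership of every file under the messaging folder, then copy
# the folder preserving attributes. Paths are demo stand-ins.
SRC=./datasave-demo/messaging
DEST=./datasave-demo/backup

# Simulate a messaging folder with one data file:
mkdir -p "$SRC"
echo "queue-data" > "$SRC/queue.dat"
chmod 640 "$SRC/queue.dat"

# Record "user group path" for every file (GNU stat assumed):
mkdir -p "$DEST"
find "$SRC" -type f -exec stat -c '%U %G %n' {} \; > "$DEST/ownership.txt"

# -a preserves mode and timestamps (and ownership when run as root):
cp -r -a "$SRC/." "$DEST/messaging"
```

On restore, the manifest can drive a loop such as `while read u g f; do chown "$u:$g" "$f"; done < ownership.txt` (run as root).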
- Make copies of the following files:
- JDBC drivers, depending on your database type.
- Customized files that you put in the component's PV for runtime. For example, customized font files.
- The configuration files that you use to set up your persistent storage, and your database server.
- If you have a database, back up the secret definition that is used to store the database user name and password, and the configuration files that you used to set up your database server.
- If you have a database, back up the data in your database by using your preferred method.
The following table shows the databases that need to be backed up.
Table 3. Databases that need to be backed up for each capability
- IBM Automation Decision Services: the MongoDB databases that you are using for the decision designer or the decision runtime.
- IBM Automation Document Processing: the Engine base database and the Engine tenant databases.
- IBM Automation Workstream Services: the Db2®, Oracle, or PostgreSQL database that you are using.
- IBM Business Automation Workflow: the Db2, Oracle, or PostgreSQL database that you are using.
- IBM FileNet Content Manager: the databases for the Global Configuration Database and your object store.
- IBM Operational Decision Manager: the Decision Center database and the Decision Server database.
- IBM Workflow Process Service Authoring: the default EDB PostgreSQL database, or your own PostgreSQL database.
- IBM Workflow Process Service Runtime: your embedded or external PostgreSQL database.
Database information can be found under the datasource_configuration section of the custom resource file.
For PostgreSQL: To configure backup and recovery, see Backup and Recovery.
For MongoDB: To run backup operations, install MongoDB Database Tools.
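As a hedged companion to the PostgreSQL and MongoDB pointers above, the sketch below shows the standard client tools (pg_dump from PostgreSQL, mongodump from MongoDB Database Tools) wrapped in functions so nothing runs until you call them. The host, user, database name, and URI values are hypothetical placeholders; substitute the values from the datasource_configuration section of your custom resource file.

```shell
#!/bin/sh
# Placeholder connection details -- replace with your own values.
PG_HOST=postgres.example.com                    # hypothetical host
PG_USER=cp4ba                                   # hypothetical user
PG_DB=bawdb                                     # hypothetical database name
MONGO_URI="mongodb://mongo.example.com:27017"   # hypothetical URI

backup_postgres() {
  # Custom-format dump; restore it later with pg_restore.
  pg_dump -h "$PG_HOST" -U "$PG_USER" -Fc "$PG_DB" > "$PG_DB.dump"
}

backup_mongo() {
  # mongodump is part of MongoDB Database Tools.
  mongodump --uri "$MONGO_URI" --out ./mongo-backup
}

# backup_postgres && backup_mongo   # run against live servers
```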
If you are using Db2, you can complete an online or offline backup by completing the following steps.
Run the following commands to complete an offline backup. If you want to do an online backup, you must also complete this step.
mkdir -p /home/db2inst1/backup/2103
db2 backup db UMSDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
db2 backup db TOSDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
db2 backup db GCDDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
db2 backup db AEDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
db2 backup db ICNDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
db2 backup db BAWDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
db2 backup db DOCSDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
db2 backup db DOSDB to /home/db2inst1/backup/2103 WITH 2 BUFFERS BUFFER 1024
If you want an online backup, complete the following steps.
- Enable archival logging for each database in the environment. You can also configure the interval between each backup. For example:
mkdir -p /home/db2inst1/archive/UMSDB
db2 update db cfg for UMSDB using LOGINDEXBUILD on
db2 update db cfg for UMSDB using LOGARCHMETH1 disk:/home/db2inst1/archive/UMSDB
mkdir -p /home/db2inst1/archive/TOSDB
db2 update db cfg for TOSDB using LOGINDEXBUILD on
db2 update db cfg for TOSDB using LOGARCHMETH1 disk:/home/db2inst1/archive/TOSDB
mkdir -p /home/db2inst1/archive/GCDDB
db2 update db cfg for GCDDB using LOGINDEXBUILD on
db2 update db cfg for GCDDB using LOGARCHMETH1 disk:/home/db2inst1/archive/GCDDB
mkdir -p /home/db2inst1/archive/AEDB
db2 update db cfg for AEDB using LOGINDEXBUILD on
db2 update db cfg for AEDB using LOGARCHMETH1 disk:/home/db2inst1/archive/AEDB
mkdir -p /home/db2inst1/archive/ICNDB
db2 update db cfg for ICNDB using LOGINDEXBUILD on
db2 update db cfg for ICNDB using LOGARCHMETH1 disk:/home/db2inst1/archive/ICNDB
mkdir -p /home/db2inst1/archive/BAWDB
db2 update db cfg for BAWDB using LOGINDEXBUILD on
db2 update db cfg for BAWDB using LOGARCHMETH1 disk:/home/db2inst1/archive/BAWDB
mkdir -p /home/db2inst1/archive/DOCSDB
db2 update db cfg for DOCSDB using LOGINDEXBUILD on
db2 update db cfg for DOCSDB using LOGARCHMETH1 disk:/home/db2inst1/archive/DOCSDB
mkdir -p /home/db2inst1/archive/DOSDB
db2 update db cfg for DOSDB using LOGINDEXBUILD on
db2 update db cfg for DOSDB using LOGARCHMETH1 disk:/home/db2inst1/archive/DOSDB
- Terminate your database connections to prevent errors while backing up:
db2 force applications all
- Complete the online backup by running the following commands:
mkdir -p /home/db2inst1/backup/2103/online
db2 backup db UMSDB online to /home/db2inst1/backup/2103/online
db2 backup db TOSDB online to /home/db2inst1/backup/2103/online
db2 backup db GCDDB online to /home/db2inst1/backup/2103/online
db2 backup db AEDB online to /home/db2inst1/backup/2103/online
db2 backup db ICNDB online to /home/db2inst1/backup/2103/online
db2 backup db BAWDB online to /home/db2inst1/backup/2103/online
db2 backup db DOCSDB online to /home/db2inst1/backup/2103/online
db2 backup db DOSDB online to /home/db2inst1/backup/2103/online
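The eight per-database commands above all follow one pattern, so they can also be driven from a list. This is a convenience sketch rather than the documented procedure; the DB2 variable is parameterized purely so the loop can be dry-run, and on the database host you would leave it as the real db2 command.

```shell
#!/bin/sh
# Run "backup db ... online" for every database in the list, stopping at
# the first failure. DB2 defaults to the real client command.
DB2="${DB2:-db2}"
DBS="UMSDB TOSDB GCDDB AEDB ICNDB BAWDB DOCSDB DOSDB"
BACKUP_DIR=/home/db2inst1/backup/2103/online

backup_all_online() {
  for db in $DBS; do
    "$DB2" backup db "$db" online to "$BACKUP_DIR" || return 1
  done
}

# mkdir -p "$BACKUP_DIR" && backup_all_online   # run on the Db2 host
```

The same loop with the `online` keyword and `WITH 2 BUFFERS BUFFER 1024` options removed or added covers the offline variant as well.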
- If you use Business Automation Insights, back up the data. Business Automation Insights stores data in two different places.
- Elasticsearch contains the time series and the summaries. For more information, see Taking and restoring snapshots of Elasticsearch data.
- Flink contains the state of event processing. For more information, see Restarting from a checkpoint or savepoint.
- If necessary, back up the Lightweight Directory Access Protocol (LDAP) files. Different types of LDAP servers have different backup methods. Make sure that the restored data in the LDAP database is the same as the source LDAP data. For IBM Security Directory Server, see IBM Security Directory Server backup and restore.
- To back up Business Teams Service (BTS), see Backing up and restoring.
- Complete the backup procedures for the following components that you configured in your environment.
- IBM Automation Decision Services: Backing up Automation Decision Services
- IBM FileNet Content Manager: Backup and recovery of a container environment
- IBM Business Automation Navigator: Backing up IBM® Content Navigator
What to do next
When you are ready to resume work, scale your environment pods and operators back up by running the following commands:
for i in $(oc get deploy -o name | grep icp4adeploy); do echo "start $i"; oc scale $i --replicas=1; done
for i in $(oc get sts -o name | grep icp4adeploy); do echo "start $i"; oc scale $i --replicas=1; done
echo "start operators ..."
oc scale deploy ibm-cp4a-operator --replicas=1
oc scale deploy ibm-pfs-operator --replicas=1
oc scale deploy ibm-content-operator --replicas=1