Backing up your environments

It is important to back up your data so that you can resume work as quickly and effectively as possible.

Before you begin

Note: The IBM Cloud Pak® for Business Automation disaster recovery solution requires backing up and restoring both IBM Cloud Pak for Business Automation components and IBM Cloud Pak foundational services. The IBM Cloud Pak for Business Automation disaster recovery documentation only provides backup and restore instructions for IBM Cloud Pak for Business Automation components. For instructions to back up and restore IBM Cloud Pak foundational services, contact the IBM Cloud Pak foundational services team.

Wherever icp4adeploy appears on this page, replace it with the value that you set for metadata.name in your IBM Cloud Pak for Business Automation custom resource (CR) file.

Before you start to back up your environment, you must stop your environment to prevent changes in your persistent volumes (PVs) and database. If you do not stop your environment, your PV data and databases might not be backed up properly.
  1. Optional: If you are using IBM Business Automation Studio, export your data. You cannot export your data after your environment is stopped.
  2. Scale down all your environment pods to 0 by running the following commands:
    oc scale deploy ibm-cp4a-operator --replicas=0
    oc scale deploy ibm-pfs-operator  --replicas=0
    oc scale deploy ibm-content-operator  --replicas=0
    for i in `oc get deploy -o name |grep icp4adeploy`; do oc scale $i --replicas=0; done
    for i in `oc get sts -o name |grep icp4adeploy`; do oc scale $i --replicas=0; done
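
After the pods are scaled down, confirm that no deployment or stateful set pods remain before you start the backup. A minimal check, assuming the default icp4adeploy name:
    oc get pods --no-headers | grep -c icp4adeploy
The command prints 0 when all the pods are stopped.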

About this task

Tip: Back up each environment in your multiple-zone clusters regularly. The shorter the time between two backups, the less data you can potentially lose. Configure the cert-manager to set up the TLS key and certificate secrets.

Use the following steps to back up IBM Cloud Pak for Business Automation in a multiple-zone environment.

Procedure

  1. Make copies of the Cloud Pak custom resource (CR) files that are used in the primary and secondary environments. The CR file for a secondary environment has a different hostname from the primary environment.
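    For example, a minimal sketch that exports the deployed CR to a file, assuming the ICP4ACluster resource kind and the default icp4adeploy name:
      oc get icp4acluster icp4adeploy -o yaml > icp4adeploy-cr-backup.yaml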
  2. Back up the security definitions in the following table. For more information, see Creating secrets to protect sensitive configuration data.
    Table 1. Secrets to back up
    Secrets | Example secret name
    IBM Cloud Pak for Business Automation secrets | icp4adeploy-cpe-oidc-secret, admin-user-details
    Image pull secret. Not present in an air-gapped environment. | ibm-entitlement-key
    Lightweight Directory Access Protocol (LDAP) secret | ldap-bind-secret
    LDAP SSL certificate secret. Required if you enabled SSL connection for LDAP. You must also back up the certificate file. | ldap-ssl-cert
    Database SSL certificate secret. Required if you enabled SSL connection for the database. You must also back up the certificate file. For examples of secret names, see Preparing the databases. | ibm-dba-db2-cacert (if you are using Db2)
    Shared encryption key secret | ibm-iaws-shared-key-secret
    IBM Business Automation Workflow secret | ibm-baw-wfs-server-db-secret
    Process Federation Server admin secret | ibm-pfs-admin-secret
    IBM Business Automation Application secrets | ibm-aae-app-engine-secret, icp4adeploy-workspace-aae-app-engine-admin-secret
    Resource Registry secret | icp4adeploy-rr-admin-secret
    Database credentials for Document Processing | ibm-aca-db-secret
    CP4BA database SSL secret. Required if you enabled SSL connection for the CP4BA database. | ibm-cp4ba-db-ssl-secret-for-<dbServerAlias>
    Automation Document Processing secret. Configured in preparation for use with document processing. | ibm-adp-secret
    IBM Business Automation Navigator secret | ibm-ban-secret
    IBM FileNet® Content Manager secret | ibm-fncm-secret
    IBM Business Automation Studio secret | ibm-bas-admin-secret
    Application Engine playback server secret | ibm-playback-server-admin-secret
    IBM Workflow Process Service Runtime admin secret | <cr_name>-wfps-admin-secret
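    You can export each of these secrets to YAML so that you can re-create them during a restore. A minimal sketch, assuming a namespace of ibm-cp4ba and the example secret names from Table 1; adjust both to match your deployment:
      #!/bin/sh
      NS=ibm-cp4ba
      # Replace this list with the secrets that exist in your deployment.
      for s in icp4adeploy-cpe-oidc-secret admin-user-details ibm-entitlement-key ldap-bind-secret ibm-fncm-secret ibm-ban-secret
      do
          oc get secret $s -n $NS -o yaml > backup-secret-$s.yaml
      done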
  3. Back up your PVC definitions and PV definitions depending on your type of provisioning:
    • If you are using static provisioning, back up your PVC definitions, PV definitions, and the content in the PV.
    • If you are using dynamic provisioning, the PV and PVC definitions are created automatically by the operator, so you need to back up only the PVC definitions and the content in the PVs. To back up the PVC definitions, get each definition and modify the format so that the PVC can be redeployed. The following sample script exports all the PVC definitions in the namespace. Check the exported definitions against the list of PVC definitions for your capabilities and remove the ones that you don't need.
      #!/bin/sh

      # Namespace that contains the Cloud Pak deployment.
      NS=ibm-cp4ba

      # Export every PVC definition in the namespace to pvc.yaml, stripping the
      # runtime fields so that the definitions can be re-applied during a restore.
      pvcbackup() {
      oc get pvc -n $NS --no-headers=true | while read each
      do
          pvc=`echo $each | awk '{ print $1 }'`
          echo "---" >> pvc.yaml
          oc get pvc $pvc -n $NS -o yaml \
            | yq eval 'del(.status, .metadata.finalizers, .metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields, .metadata.ownerReferences, .spec.volumeMode, .spec.volumeName)' - >> pvc.yaml
      done
      }
      pvcbackup
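
      During a restore, you can re-create the PVCs from the exported file, for example (with the same namespace as in the script):
      oc apply -f pvc.yaml -n $NS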

      If you are using NFS-based storage, you can back up the content in the PVs by following the instructions in step 4. When you restore the environment, you can create the PVCs by using the backup definitions and by copying the content to the corresponding PVs.

    The following table lists PVC definitions that you must back up and restore (see the Needs to be backed up or replicated column) if you have them, and others that you might also choose to back up.
    Table 2. PVC definitions to back up
    Component | Custom resource template persistent volume claim name | Description | Needs to be backed up or replicated
    IBM Business Automation Navigator | icn-asperastore | Business Automation Navigator storage for Aspera. | No
    IBM Business Automation Navigator | icn-cfgstore | Business Automation Navigator Liberty configuration. | Yes
    IBM Business Automation Navigator | icn-logstore | Liberty and Business Automation Navigator logs. Multiple IBM Content Navigator pods write logs here. | No
    IBM Business Automation Navigator | icn-pluginstore | Business Automation Navigator custom plug-ins. | No
    IBM Business Automation Navigator | icn-vw-cachestore | Business Automation Navigator storage for the Daeja ViewONE cache. | No
    IBM Business Automation Navigator | icn-vw-logstore | Business Automation Navigator Daeja ViewONE viewer logs. | No
    Do not back up the following PVC definitions. If you back up these definitions, you might encounter an error.
    • data-iaf-system-elasticsearch-es-data-0
    • iaf-system-elasticsearch-es-snap-main-pvc
    • ibm-bts-cnpg-bawent-cp4ba-bts-1
    • user-home-pvc
    Depending on the capabilities that you are using, you must back up more PVC definitions. See the following links:
  4. If you are using NFS-based storage, back up all the content in the PVs. You can choose which files to restore to your environment later. The generated folder names for dynamically provisioned PVs are not static. For example, a folder name might look similar to bawent-cmis-cfgstore-pvc-ctnrs-pvc-e5241e0c-3811-4c0d-8d0f-cb66dd67f672. The folder name is different for each deployment, so the following script maps each generated folder to a backup folder that is named after its PVC:
    #!/bin/sh

    # Namespace, the directory that contains the NFS PV folders, and the backup target directory.
    NS=bawent
    SOURCE_DIR=/home/pv/2301
    BACKUP_DIR=/home/backup

    # For each PVC, locate its dynamically generated PV folder under SOURCE_DIR and
    # copy the content to a folder that is named after the PVC, so that the backup
    # location stays stable across deployments.
    pvbackup() {
        oc get pvc -n $NS --no-headers=true | while read each
        do
            pvc=`echo $each | awk '{ print $1 }'`
            pv=`echo $each | awk '{ print $3 }'`

            if [ -d "$SOURCE_DIR/$NS-$pvc-$pv" ]
            then
                echo "copying pv $pv"
                mkdir -p $BACKUP_DIR/$pvc
                # The -a flag preserves ownership, permissions, and timestamps.
                cp -r -a $SOURCE_DIR/$NS-$pvc-$pv/. $BACKUP_DIR/$pvc
                echo ""
            else
                echo "NOT FOUND for $pvc"
            fi
        done
    }

    pvbackup
  5. If you are using IBM Workflow Process Service, make sure that you back up the PVCs that are prefixed with datasave.
    1. Use the Kubernetes CLI to get each PVC definition and back up the necessary parts of the definition.
    2. Back up the files under the <the_folder_for_datasave_PV>/messaging folder and preserve the user:group ownership information.
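    A minimal sketch of both substeps, assuming that the datasave PVCs use the datasave- name prefix, that the PV folder is mounted locally, and an illustrative backup path; <namespace> is your deployment namespace:
      # Export the definition of each datasave PVC.
      oc get pvc -n <namespace> --no-headers=true | awk '/^datasave/ { print $1 }' | while read pvc
      do
          oc get pvc $pvc -n <namespace> -o yaml > backup-$pvc.yaml
      done
      # The -a flag preserves the user:group ownership and permissions.
      cp -a <the_folder_for_datasave_PV>/messaging /home/backup/datasave-messaging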
  6. Make copies of the following files:
  7. If you have a database, back up the secret definition that is used to store the database user name and password, and the configuration files that you used to set up your database server.
  8. If you have a database, back up the data in your database by using your preferred method. The following table shows databases that need to be backed up.
    Table 3. Databases that need to be backed up for each capability
    Capability | Databases that need to be backed up
    IBM Automation Decision Services | The MongoDB databases that you are using for the decision designer or the decision runtime.
    IBM Automation Document Processing | The engine base database and the engine tenant databases.
    IBM Automation Workstream Services | The Db2®, Oracle, PostgreSQL, or SQL Server database that you are using.
    IBM Business Automation Workflow | The Db2, Oracle, PostgreSQL, or SQL Server database that you are using.
    IBM FileNet Content Manager | The databases for the Global Configuration Database and your object store.
    IBM Operational Decision Manager | The Decision Center database and the Decision Server database.
    IBM Workflow Process Service Authoring | The default EDB PostgreSQL database, or your own PostgreSQL database.
    IBM Workflow Process Service Runtime | Your embedded or external PostgreSQL database.

    Database information can be found under the datasource_configuration section of the custom resource file.

    To configure backup and recovery for PostgreSQL, see Backup and Recovery.
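    For example, a generic PostgreSQL dump with pg_dump; the connection values are placeholders for your own server:
      pg_dump -h <db_host> -p <db_port> -U <db_user> -Fc -f /home/backup/2301/<db_name>.dump <db_name>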

    If you are using Db2, you can complete an offline or online backup by completing the following steps.
    Run the following commands to complete an offline backup. If you want to do an online backup, you must also complete this offline backup step first. For example,
    mkdir -p /home/db2inst1/backup/2301
    db2 backup db TOSDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db GCDDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db AAEDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db ICNDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db BAWDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db DOCSDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db DOSDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db BASDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db APPDB to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db ADPBASE to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db PROJ1 to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db DEVOS1 to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
    db2 backup db AEOS to /home/db2inst1/backup/2301 WITH 2 BUFFERS BUFFER 1024
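
    Optionally, verify the integrity of each backup image with the Db2 db2ckbkp utility. The image file name below is illustrative; the timestamp portion is generated at backup time:
      db2ckbkp /home/db2inst1/backup/2301/TOSDB.0.db2inst1.DBPART000.20230101120000.001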

    If you want an online backup, complete the following steps.

    1. Enable archival logging for each database in the environment. You can also configure the interval between each backup. For example,
      mkdir  -p   /home/db2inst1/archive/TOSDB 
      db2 update db cfg for TOSDB using LOGINDEXBUILD  on 
      db2 update db cfg for TOSDB using LOGARCHMETH1   disk:/home/db2inst1/archive/TOSDB   
       
      mkdir  -p   /home/db2inst1/archive/GCDDB 
      db2 update db cfg for GCDDB using LOGINDEXBUILD  on 
      db2 update db cfg for GCDDB using LOGARCHMETH1   disk:/home/db2inst1/archive/GCDDB   
       
      mkdir  -p   /home/db2inst1/archive/AAEDB 
      db2 update db cfg for AAEDB using LOGINDEXBUILD  on 
      db2 update db cfg for AAEDB using LOGARCHMETH1   disk:/home/db2inst1/archive/AAEDB   
       
      mkdir  -p   /home/db2inst1/archive/ICNDB 
      db2 update db cfg for ICNDB using LOGINDEXBUILD  on 
      db2 update db cfg for ICNDB using LOGARCHMETH1   disk:/home/db2inst1/archive/ICNDB   
       
      mkdir  -p   /home/db2inst1/archive/BAWDB 
      db2 update db cfg for BAWDB using LOGINDEXBUILD  on 
      db2 update db cfg for BAWDB using LOGARCHMETH1   disk:/home/db2inst1/archive/BAWDB   
       
      mkdir  -p   /home/db2inst1/archive/DOCSDB 
      db2 update db cfg for DOCSDB using LOGINDEXBUILD  on 
      db2 update db cfg for DOCSDB using LOGARCHMETH1   disk:/home/db2inst1/archive/DOCSDB   
       
      mkdir  -p   /home/db2inst1/archive/DOSDB 
      db2 update db cfg for DOSDB using LOGINDEXBUILD  on 
      db2 update db cfg for DOSDB using LOGARCHMETH1   disk:/home/db2inst1/archive/DOSDB 
      
      mkdir  -p   /home/db2inst1/archive/BASDB 
      db2 update db cfg for BASDB using LOGINDEXBUILD  on 
      db2 update db cfg for BASDB using LOGARCHMETH1   disk:/home/db2inst1/archive/BASDB 
      
      mkdir  -p   /home/db2inst1/archive/APPDB 
      db2 update db cfg for APPDB using LOGINDEXBUILD  on 
      db2 update db cfg for APPDB using LOGARCHMETH1   disk:/home/db2inst1/archive/APPDB 
      
      mkdir  -p   /home/db2inst1/archive/ADPBASE 
      db2 update db cfg for ADPBASE using LOGINDEXBUILD  on 
      db2 update db cfg for ADPBASE using LOGARCHMETH1   disk:/home/db2inst1/archive/ADPBASE 
      
      mkdir  -p   /home/db2inst1/archive/PROJ1 
      db2 update db cfg for PROJ1 using LOGINDEXBUILD  on 
      db2 update db cfg for PROJ1 using LOGARCHMETH1   disk:/home/db2inst1/archive/PROJ1
      
      mkdir  -p   /home/db2inst1/archive/DEVOS1 
      db2 update db cfg for DEVOS1 using LOGINDEXBUILD  on 
      db2 update db cfg for DEVOS1 using LOGARCHMETH1   disk:/home/db2inst1/archive/DEVOS1
      
      mkdir  -p   /home/db2inst1/archive/AEOS 
      db2 update db cfg for AEOS using LOGINDEXBUILD  on 
      db2 update db cfg for AEOS using LOGARCHMETH1   disk:/home/db2inst1/archive/AEOS
    2. Terminate your database connections to prevent errors while backing up:
      db2 force applications all 
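      You can confirm that all connections are closed before you take the backup; db2 list applications reports that no data was returned when no connections remain:
      db2 list applications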
    3. Complete the online backup by running the following commands. For example,
      mkdir -p /home/db2inst1/backup/2301/online  
      db2 backup db TOSDB online to /home/db2inst1/backup/2301/online 
      db2 backup db GCDDB online to /home/db2inst1/backup/2301/online 
      db2 backup db AAEDB online to /home/db2inst1/backup/2301/online 
      db2 backup db ICNDB online to /home/db2inst1/backup/2301/online 
      db2 backup db BAWDB online to /home/db2inst1/backup/2301/online 
      db2 backup db DOCSDB online to /home/db2inst1/backup/2301/online 
      db2 backup db DOSDB online to /home/db2inst1/backup/2301/online 
      db2 backup db BASDB online to /home/db2inst1/backup/2301/online 
      db2 backup db APPDB online to /home/db2inst1/backup/2301/online 
      db2 backup db ADPBASE online to /home/db2inst1/backup/2301/online 
      db2 backup db PROJ1 online to /home/db2inst1/backup/2301/online 
      db2 backup db DEVOS1 online to /home/db2inst1/backup/2301/online 
      db2 backup db AEOS online to /home/db2inst1/backup/2301/online 
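    The per-database commands in the preceding steps can also be expressed as a single loop. A sketch, assuming the same example database names and paths as above:
      #!/bin/sh
      # Example database names; replace them with the databases in your deployment.
      DBS="TOSDB GCDDB AAEDB ICNDB BAWDB DOCSDB DOSDB BASDB APPDB ADPBASE PROJ1 DEVOS1 AEOS"
      BACKUP_DIR=/home/db2inst1/backup/2301/online
      mkdir -p $BACKUP_DIR
      # Enable archival logging for every database.
      for db in $DBS
      do
          mkdir -p /home/db2inst1/archive/$db
          db2 update db cfg for $db using LOGINDEXBUILD on
          db2 update db cfg for $db using LOGARCHMETH1 disk:/home/db2inst1/archive/$db
      done
      # Terminate connections, then take the online backups.
      db2 force applications all
      for db in $DBS
      do
          db2 backup db $db online to $BACKUP_DIR
      done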
  9. If you use Business Automation Insights, back up the data.
    Business Automation Insights stores data in two different places. In addition, you are responsible for putting in place backup and restore processes for the Kafka server, which is configured through Cloud Pak for Business Automation.
  10. If necessary, back up the Lightweight Directory Access Protocol (LDAP) files. Different types of LDAP servers have different backup methods. Make sure that the restored data in the LDAP database is the same as the source LDAP.
    For IBM Security Directory Server, see IBM Security Directory Server backup and restore.
  11. To back up Business Teams Service (BTS), see Backing up and restoring.
  12. Complete the backup procedures for the following components that you configured in your environment.

What to do next

When the backup is complete, scale up your environment pods by running the following commands.
for i in `oc get deploy -o name |grep icp4adeploy`; do echo "  start $i" ; oc scale $i --replicas=1; done
for i in `oc get sts -o name |grep icp4adeploy`; do echo "  start $i" ; oc scale $i --replicas=1; done
echo "  start operators ..."
oc scale deploy ibm-cp4a-operator --replicas=1
oc scale deploy ibm-pfs-operator  --replicas=1
oc scale deploy ibm-content-operator  --replicas=1
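
You can then watch the rollouts until the pods are ready, for example:
for i in `oc get deploy -o name |grep icp4adeploy`; do oc rollout status $i; done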