Backing up your environment
It is important to back up your data so that you can resume work as quickly and effectively as possible.
Before you begin
For instructions to back up and restore Cloud Pak foundational services, see IBM Cloud Pak foundational services backup and restore.
For all mentions of icp4adeploy in the provided examples, replace it with the value that you set for metadata.name in your IBM Cloud Pak for Business Automation custom resource (CR) file.
Before you start to back up your environment, stop your environment to prevent changes in your persistent volumes (PVs) and database. If you do not stop your environment, your PV data and databases might not be backed up properly.
oc scale deploy ibm-cp4a-operator --replicas=0
oc scale deploy ibm-pfs-operator --replicas=0
oc scale deploy ibm-content-operator --replicas=0
for i in `oc get deploy -o name |grep icp4adeploy`; do oc scale $i --replicas=0; done
for i in `oc get sts -o name |grep icp4adeploy`; do oc scale $i --replicas=0; done
About this task
Cert-manager is used to set up the TLS key and certificate secrets. Use the following steps to back up Cloud Pak for Business Automation in a multiple-zone environment.
Procedure
- Make copies of the Cloud Pak custom resource (CR) files that are used in the primary and secondary environments. The CR file for a secondary environment has a different hostname from the primary environment.
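A minimal sketch for exporting the deployed CR to a file; the CR kind icp4acluster, the namespace, and the output file name are assumptions, so adjust them to your deployment:
# Sketch (assumptions: CR kind is icp4acluster, CR name is icp4adeploy, namespace is ibm-cp4ba)
oc get icp4acluster icp4adeploy -n ibm-cp4ba -o yaml > cp4ba-cr-backup.yaml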
- Back up the security definitions in the following table. For more information, see Creating secrets to protect sensitive configuration data.
Table 1. Secrets to back up
| Secrets | Example secret name |
|---|---|
| Cloud Pak for Business Automation secrets | icp4adeploy-cpe-oidc-secret, admin-user-details |
| Images pull secret. Not present in an airgap environment. | ibm-entitlement-key |
| Lightweight Directory Access Protocol (LDAP) secret | ldap-bind-secret |
| LDAP SSL certificate secret. Required if you enabled SSL connection for LDAP. You must also back up the certificate file. | ldap-ssl-cert |
| Database SSL certificate secret. Required if you enabled an SSL connection for the database. You must also back up the certificate file. For examples of secret names, see Preparing the databases. | If you are using Db2, an example is ibm-dba-db2-cacert |
| Shared encryption key secret | ibm-iaws-shared-key-secret |
| IBM Business Automation Workflow secret | ibm-baw-wfs-server-db-secret |
| Process Federation Server admin secret | ibm-pfs-admin-secret |
| IBM Business Automation Application secrets | ibm-aae-app-engine-secret, icp4adeploy-workspace-aae-app-engine-admin-secret |
| Resource Registry secret | icp4adeploy-rr-admin-secret |
| Database credential for Document Processing | ibm-aca-db-secret |
| CP4BA database SSL secret. Required if you enabled SSL connection for the CP4BA database. | ibm-cp4ba-db-ssl-secret-for-<dbServerAlias> |
| Automation Document Processing secret, configured in preparation for use with document processing | ibm-adp-secret |
| IBM GitHub secret, which contains a certificate to secure a connection to the required Git server | git-tls-secret |
| IBM Business Automation Navigator secret | ibm-ban-secret |
| IBM FileNet Content Manager secret | ibm-fncm-secret |
| IBM Business Automation Studio secret | ibm-bas-admin-secret |
| Application Engine playback server secret | ibm-playback-server-admin-secret |
| IBM Workflow Process Service Runtime admin secret | <cr_name>-wfps-admin-secret |
- Back up your ConfigMaps if required. You might have modified ConfigMaps that are specific to your use cases.
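A minimal sketch for exporting secret and ConfigMap definitions with oc; the secret names, ConfigMap name, namespace, and output directory are placeholders that you must replace with the names from your deployment:
#!/bin/bash
# Sketch: export secret and ConfigMap definitions to YAML files (names and paths are placeholders)
NS=ibm-cp4ba
BACKUP_DIR=/home/backup/definitions
mkdir -p $BACKUP_DIR
for secret in ibm-entitlement-key ldap-bind-secret ibm-fncm-secret ibm-ban-secret; do
  oc get secret $secret -n $NS -o yaml > $BACKUP_DIR/secret-$secret.yaml
done
for cm in <your-modified-configmap>; do
  oc get configmap $cm -n $NS -o yaml > $BACKUP_DIR/configmap-$cm.yaml
done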
- Back up your PVC definitions and PV definitions based on your type of provisioning:
- If you are using static provisioning, back up your PVC definitions, PV definitions, and the content in the PV.
- If you are using dynamic provisioning, the operator creates the PVC definitions automatically, so you need to back up only the PVC definitions. To back up the PVC definitions, get each definition and modify the format so that the PVC can be deployed. The following sample script gets all the PVC definitions. Reference the list of PVC definitions that are related to their capabilities and remove the ones that you don't need.
#!/bin/bash
NS=ibm-cp4ba

pvcbackup() {
  oc get pvc -n $NS --no-headers=true | while read each
  do
    pvc=`echo $each | awk '{ print $1 }'`
    # skip BTS PVCs
    if [[ "$pvc" == ibm-bts-cnpg* ]] ; then
      continue
    fi
    echo "---" >> pvc.yaml
    # Strip runtime-only fields so that the saved definition can be reapplied
    kubectl get pvc $pvc -n $NS -o yaml \
      | yq eval 'del(.status, .metadata.finalizers, .metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields, .metadata.ownerReferences, .spec.volumeMode, .spec.volumeName)' - >> pvc.yaml
  done
}

pvcbackup
Do not back up the following PVC definitions. If you back up these definitions, you might encounter an error.
Table 2. PVC definitions to back up
| Component | Custom resource template persistent volume claim name | Description | Needs to be backed up or replicated |
|---|---|---|---|
| IBM Business Automation Navigator | icn-asperastore | IBM Business Automation Navigator storage for Aspera. | No |
| IBM Business Automation Navigator | icn-cfgstore | Business Automation Navigator Liberty configuration. | Yes |
| IBM Business Automation Navigator | icn-logstore | Liberty and Business Automation Navigator logs. Multiple IBM Content Navigator pods write logs here. | No |
| IBM Business Automation Navigator | icn-pluginstore | Business Automation Navigator custom plug-ins. | No |
| IBM Business Automation Navigator | icn-vw-cachestore | Business Automation Navigator storage for the Daeja ViewONE cache. | No |
| IBM Business Automation Navigator | icn-vw-logstore | Business Automation Navigator viewer logs for the Daeja ViewONE. | No |
The PVC definitions that you must not back up are: data-iaf-system-elasticsearch-es-data-0, iaf-system-elasticsearch-es-snap-main-pvc, ibm-bts-cnpg-bawent-cp4ba-bts-1, and user-home-pvc.
- Back up all the content in the PVs if you are using dynamic provisioning. When you restore the environment, you can use the backup definition and copy the content to the corresponding PV to create the PVC. You can choose which files to restore on your environment later. The generated folder names for dynamically provisioned PVs are not static. For example, the folder name might look similar to bawent-cmis-cfgstore-pvc-ctnrs-pvc-e5241e0c-3811-4c0d-8d0f-cb66dd67f672. The folder name is different for each deployment, so you must use a mapping folder to back up the content. The following script can be used to create backups of your PVs.
#!/bin/bash
NS=bawent
SOURCE_PV_DIR=/home/pv/2401
BACKUP_PV_DIR=/home/backup

pvbackup() {
  oc get pvc -n $NS --no-headers=true | while read each
  do
    pvc=`echo $each | awk '{ print $1 }'`
    pv=`echo $each | awk '{ print $3 }'`
    # skip BTS PVCs
    if [[ "$pvc" == ibm-bts-cnpg* ]] ; then
      continue
    fi
    if [ -d "$SOURCE_PV_DIR/$NS-$pvc-$pv" ]
    then
      echo "copying pv $pv "
      mkdir -p $BACKUP_PV_DIR/$pvc
      # -a preserves ownership, permissions, and timestamps
      cp -r -a $SOURCE_PV_DIR/$NS-$pvc-$pv/. $BACKUP_PV_DIR/$pvc
      echo ""
    else
      echo "NOT FOUND for $pvc"
    fi
  done
}

pvbackup
- If you are using IBM Workflow Process Service, make sure to back up PVCs that are prefixed with datasave, as shown in the sketch after this list:
- Use the kubernetes command to get the PVC definition and back up the necessary parts of this definition.
- Back up the files under the folder <the_folder_for_datasave_PV>/messaging and keep the user:group information.
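A minimal sketch under these assumptions: the PVC name, the datasave folder path, and the backup file name are placeholders, and tar is used because it stores the user:group information in the archive:
# Sketch: save the datasave PVC definition (PVC name is a placeholder)
oc get pvc datasave-<instance> -o yaml > datasave-pvc.yaml
# Sketch: archive the messaging folder; tar stores permissions and user:group ownership
tar -cpzf /home/backup/datasave-messaging.tar.gz -C <the_folder_for_datasave_PV> messaging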
- Make copies of the following files:
- JDBC drivers that depend on your database type.
- Customized files that you put in the components PV for runtime. For example, customized font files.
- The configuration files that you use to set up your persistent storage, and your database server.
- If you have a database, back up the security definition that is used to store the database username and password, and the configuration files that you used to set up your database server.
- If you have a database, back up the data in your database by using your preferred method.
The following table shows the databases that need to be backed up.
Table 3. Databases that need to be backed up for each capability
| Capability | Databases that need to be backed up |
|---|---|
| IBM Automation Decision Services | MongoDB databases that you are using for the decision designer or the decision runtime. |
| IBM Automation Document Processing | Engine base database and engine tenant databases. |
| IBM Automation Workstream Services | The Db2®, Oracle, PostgreSQL, or SQL Server database that you are using. |
| IBM Business Automation Workflow | The Db2, Oracle, PostgreSQL, or SQL Server database that you are using. |
| IBM FileNet Content Manager | The databases for the Global Configuration Database and your object store. |
| IBM Operational Decision Manager | Decision Center database and Decision Server database. Database information can be found under the datasource_configuration section of the custom resource file. |
| IBM Workflow Process Service Authoring | The default EDB PostgreSQL database, or your own PostgreSQL database. |
| IBM Workflow Process Service Runtime | Your embedded or external PostgreSQL database. To configure backup and recovery for PostgreSQL, see Backup and Recovery. |
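For the PostgreSQL databases, a minimal sketch using pg_dump; the host, user, and database names are placeholders, and your PostgreSQL setup might require a different backup method:
# Sketch (placeholder connection values): dump one database to a compressed archive
mkdir -p /home/backup/postgres
pg_dump -h <db-host> -p 5432 -U <db-user> -Fc <wfps-database> > /home/backup/postgres/wfps-database.dump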
If you are using Db2, you can complete an online or offline backup by completing the following steps. Run the following commands to complete an offline backup. If you want to do an online backup, you must also complete this step.
mkdir -p /home/db2inst1/backup/2401
db2 backup db TOSDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db GCDDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db AAEDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db ICNDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db BAWDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db DOCSDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db DOSDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db BASDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db APPDB to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db ADPBASE to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db PROJ1 to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db DEVOS1 to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
db2 backup db AEOS to /home/db2inst1/backup/2401 WITH 2 BUFFERS BUFFER 1024
If you want an online backup, complete the following steps.
- Enable archival logging for each database in the environment. You can also configure the interval between each backup.
mkdir -p /home/db2inst1/archive/TOSDB
db2 update db cfg for TOSDB using LOGINDEXBUILD on
db2 update db cfg for TOSDB using LOGARCHMETH1 disk:/home/db2inst1/archive/TOSDB
mkdir -p /home/db2inst1/archive/GCDDB
db2 update db cfg for GCDDB using LOGINDEXBUILD on
db2 update db cfg for GCDDB using LOGARCHMETH1 disk:/home/db2inst1/archive/GCDDB
mkdir -p /home/db2inst1/archive/AAEDB
db2 update db cfg for AAEDB using LOGINDEXBUILD on
db2 update db cfg for AAEDB using LOGARCHMETH1 disk:/home/db2inst1/archive/AAEDB
mkdir -p /home/db2inst1/archive/ICNDB
db2 update db cfg for ICNDB using LOGINDEXBUILD on
db2 update db cfg for ICNDB using LOGARCHMETH1 disk:/home/db2inst1/archive/ICNDB
mkdir -p /home/db2inst1/archive/BAWDB
db2 update db cfg for BAWDB using LOGINDEXBUILD on
db2 update db cfg for BAWDB using LOGARCHMETH1 disk:/home/db2inst1/archive/BAWDB
mkdir -p /home/db2inst1/archive/DOCSDB
db2 update db cfg for DOCSDB using LOGINDEXBUILD on
db2 update db cfg for DOCSDB using LOGARCHMETH1 disk:/home/db2inst1/archive/DOCSDB
mkdir -p /home/db2inst1/archive/DOSDB
db2 update db cfg for DOSDB using LOGINDEXBUILD on
db2 update db cfg for DOSDB using LOGARCHMETH1 disk:/home/db2inst1/archive/DOSDB
mkdir -p /home/db2inst1/archive/BASDB
db2 update db cfg for BASDB using LOGINDEXBUILD on
db2 update db cfg for BASDB using LOGARCHMETH1 disk:/home/db2inst1/archive/BASDB
mkdir -p /home/db2inst1/archive/APPDB
db2 update db cfg for APPDB using LOGINDEXBUILD on
db2 update db cfg for APPDB using LOGARCHMETH1 disk:/home/db2inst1/archive/APPDB
mkdir -p /home/db2inst1/archive/ADPBASE
db2 update db cfg for ADPBASE using LOGINDEXBUILD on
db2 update db cfg for ADPBASE using LOGARCHMETH1 disk:/home/db2inst1/archive/ADPBASE
mkdir -p /home/db2inst1/archive/PROJ1
db2 update db cfg for PROJ1 using LOGINDEXBUILD on
db2 update db cfg for PROJ1 using LOGARCHMETH1 disk:/home/db2inst1/archive/PROJ1
mkdir -p /home/db2inst1/archive/DEVOS1
db2 update db cfg for DEVOS1 using LOGINDEXBUILD on
db2 update db cfg for DEVOS1 using LOGARCHMETH1 disk:/home/db2inst1/archive/DEVOS1
mkdir -p /home/db2inst1/archive/AEOS
db2 update db cfg for AEOS using LOGINDEXBUILD on
db2 update db cfg for AEOS using LOGARCHMETH1 disk:/home/db2inst1/archive/AEOS
- Terminate your database connections to prevent errors during the backup.
db2 force applications all
- Complete the online backup by running the following commands.
mkdir -p /home/db2inst1/backup/2401/online
db2 backup db TOSDB online to /home/db2inst1/backup/2401/online
db2 backup db GCDDB online to /home/db2inst1/backup/2401/online
db2 backup db AAEDB online to /home/db2inst1/backup/2401/online
db2 backup db ICNDB online to /home/db2inst1/backup/2401/online
db2 backup db BAWDB online to /home/db2inst1/backup/2401/online
db2 backup db DOCSDB online to /home/db2inst1/backup/2401/online
db2 backup db DOSDB online to /home/db2inst1/backup/2401/online
db2 backup db BASDB online to /home/db2inst1/backup/2401/online
db2 backup db APPDB online to /home/db2inst1/backup/2401/online
db2 backup db ADPBASE online to /home/db2inst1/backup/2401/online
db2 backup db PROJ1 online to /home/db2inst1/backup/2401/online
db2 backup db DEVOS1 online to /home/db2inst1/backup/2401/online
db2 backup db AEOS online to /home/db2inst1/backup/2401/online
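Optionally, you can confirm that each backup completed by listing its backup history; a sketch, using TOSDB as an example:
# Sketch: list the backup history for one database to confirm the backup entry
db2 list history backup all for TOSDB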
- If you upgraded from version 21.0.3 to 24.0.0, you need to back up the platform-oidc-credentials secret. Make copies of the security definitions that are used to protect the configuration data in the primary and secondary environments. Get WLP_CLIENT_ID and WLP_CLIENT_SECRET from the ibm-common-services/platform-oidc-credentials secret on the backup cluster and keep them for later usage.
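A minimal sketch for extracting those two values, assuming the keys are stored base64-encoded in the secret data:
# Sketch: extract and decode the two keys from the platform-oidc-credentials secret
oc get secret platform-oidc-credentials -n ibm-common-services -o jsonpath='{.data.WLP_CLIENT_ID}' | base64 -d
oc get secret platform-oidc-credentials -n ibm-common-services -o jsonpath='{.data.WLP_CLIENT_SECRET}' | base64 -d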
- For each Business Automation Workflow or Workflow Process Service instance that is either federated or has full text search enabled, you must back up the indexed data of the BPD runtime, the saved search definitions, and the reusable queries as follows. Note: Creating backups of case instance indexes is not supported; instead, it is recommended that you rebuild case indexes as described in Rebuilding a case index.
- For instances that run on containers where full text search is enabled, or for instances that run on traditional WebSphere® Application Server where Business Automation Workflow is configured to perform the indexing of BPD tasks and process instances into the Federated Data Repository (FDR), take regular snapshots of the Opensearch or Elasticsearch index that is related to the instance BPD runtime, and restore them on the backup environment as documented in the Failover support section of Understanding the federated data repository BPD indexing.
- If Process Federation Server is configured to federate a Business Automation Workflow instance that runs on premises on traditional WebSphere Application Server, and if the indexing of this instance is configured in Process Federation Server with an <ibmPfs_bpdIndexer>, take regular snapshots of the Opensearch or Elasticsearch index of this federated system by following the procedure described at Backing up and restoring Process Federation Server indexes in the federated data repository.
- For all Business Automation Workflow or Workflow Process Service instances where full text search is enabled or which are federated by using Process Federation Server, you must regularly back up the saved search definitions and reusable queries, and restore them on the backup environment. The Process Federation Server API is exposed on the standalone Process Federation Server instance, but an embedded Process Federation Server also exposes its own API on each Business Automation Workflow and Workflow Process Service instance that runs on containers where full text search is enabled. It is recommended that you back up saved search definitions and reusable queries on all running Process Federation Server instances (standalone and embedded):
- For saved search definitions, the Saved Search Transfer REST API to export and import Process Federation Server saved searches is documented at IBM Process Federation Server REST APIs.
- For reusable queries, you can directly use the Elasticsearch or Opensearch snapshots API as documented at Backing-up and restoring reusable search queries.
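As an illustration only, an export call might look like the following sketch. The host, credentials, and endpoint path are placeholders, not the documented API; check IBM Process Federation Server REST APIs for the actual paths and parameters:
# Hypothetical sketch: export saved searches from a Process Federation Server instance
# <pfs-host> and <saved-search-transfer-path> are placeholders; see the REST API documentation
curl -k -u <admin-user>:<password> "https://<pfs-host>/<saved-search-transfer-path>" -o saved-searches.json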
- If you use Business Automation Insights, back up the data. Business Automation Insights stores data in two different places.
- OpenSearch contains the time series and the summaries. For more information, see Taking and restoring snapshots of OpenSearch data.
- Flink contains the state of event processing. For more information, see Restarting from a checkpoint or savepoint.
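For the OpenSearch part, a sketch of the snapshot REST call; the host, credentials, and repository name backup_repo are assumptions, and a snapshot repository must already be registered:
# Sketch: trigger an OpenSearch snapshot (assumes a registered snapshot repository named backup_repo)
curl -k -u <user>:<password> -X PUT "https://<opensearch-host>:9200/_snapshot/backup_repo/bai-snapshot-1?wait_for_completion=true"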
- If necessary, back up the Lightweight Directory Access Protocol (LDAP) files. Different types of LDAP servers have different backup methods. Make sure that the restored data in the LDAP database is the same as the source LDAP. For IBM Security Directory Server, see IBM Security Directory Server backup and restore.
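As an illustration only (not an IBM Security Directory Server procedure), an OpenLDAP server could be exported to LDIF like this sketch:
# Sketch: export an OpenLDAP directory to an LDIF file (run on the LDAP server host)
slapcat -l /home/backup/ldap-backup.ldif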
- To back up Business Teams Service (BTS), see Backing up and restoring.
- Complete the backup procedures for the following components that you configured in your
environment.
- IBM Automation Decision Services: Backing up Automation Decision Services
- IBM FileNet Content Manager: Backup and recovery
- IBM Business Automation Navigator: Backing up Content Navigator
What to do next
After the backup is complete, restart your environment by scaling the deployments, stateful sets, and operators back up.
for i in `oc get deploy -o name |grep icp4adeploy`; do echo " start $i" ; oc scale $i --replicas=1; done
for i in `oc get sts -o name |grep icp4adeploy`; do echo " start $i" ; oc scale $i --replicas=1; done
echo " start operators ..."
oc scale deploy ibm-cp4a-operator --replicas=1
oc scale deploy ibm-pfs-operator --replicas=1
oc scale deploy ibm-content-operator --replicas=1