Preparing storage
Prepare storage, including the required persistent volumes (PVs) and persistent volume claims (PVCs), for the operator, Application Engine, IBM Business Automation Navigator, IBM FileNet® Content Manager, Intelligent Task Prioritization, Workforce Insights, Java™ Message Service (JMS), IBM Process Federation Server, and IBM Business Automation Workflow.
Procedure
- If you haven't done it already, prepare storage for the operator by following the instructions in Preparing a namespace for the Cloud Pak operator.
- Prepare storage for Application Engine by following the instructions in Implementing storage.
- Prepare storage for IBM Business Automation Navigator by following the instructions in Creating volumes and folders for deployment on Kubernetes.

  Important: The value of the `baw_configuration[x].case.network_shared_directory_pvc` parameter (the persistent volume claim (PVC) name for the case network shared directory) in the custom resource must be set to the same value as the IBM Business Automation Navigator `pvc_for_icn_pluginstore` parameter.
- Prepare storage for IBM FileNet Content Manager by following the instructions in Configuring storage for the content services environment.
- Prepare storage for Intelligent Task Prioritization.

  Note: This step does not apply to stand-alone Business Automation Workflow.

  The Intelligent Task Prioritization feature requires PVs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning:
    Enable dynamic provisioning by setting `baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning` to `true`. The trained model files use the `shared_configuration.storage_configuration.sc_fast_file_storage_classname` storage class for their PV, and the log files use the `shared_configuration.storage_configuration.sc_medium_file_storage_classname` storage class for their PV.
  - Option 2: If your environment does not support dynamic provisioning:
    Disable dynamic provisioning by setting `baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning` to `false`. Then, create two PVs and PVCs manually, and in the custom resource file set `baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_logstore` to the name of the log PVC and `baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_trained_pipelines` to the name of the trained pipelines PVC.
  For Option 2, the following example illustrates how to create the Intelligent Task Prioritization PVs and PVCs by using NFS. An existing NFS server is required before you create the PVs and PVCs.
  - Create the related folders on the NFS server and grant the least privilege to the mounted directories by using the following commands:

    ```
    mkdir -p <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
    mkdir -p <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
    ```
  - Create the PVs and PVCs required by Intelligent Task Prioritization by saving the following YAML files on the OpenShift master node and then running the `oc apply -f <YAML_FILE_NAME>` command on each of the files in the following order:

    1. baml-itp-logstore-pv.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: baml-itp-logstore-pv
       spec:
         storageClassName: baml-itp-logstore-pv
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
         nfs:
           path: <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
           server: <NFS_SERVER_IP>
         persistentVolumeReclaimPolicy: Recycle
       ```

       where `<NFS_STORAGE_DIRECTORY>` is the storage folder on your NFS server and `<NFS_SERVER_IP>` is the IP address of your NFS server.

    2. baml-itp-logstore-pvc.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: baml-itp-logstore-pvc
       spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: 1Gi
         storageClassName: baml-itp-logstore-pv
         volumeName: baml-itp-logstore-pv
       status:
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
       ```

    3. baml-itp-trained-pipelines-pv.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: baml-itp-trained-pipelines-pv
       spec:
         storageClassName: baml-itp-trained-pipelines-pv
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 10Gi
         nfs:
           path: <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
           server: <NFS_SERVER_IP>
         persistentVolumeReclaimPolicy: Recycle
       ```

    4. baml-itp-trained-pipelines-pvc.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: baml-itp-trained-pipelines-pvc
       spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: 10Gi
         storageClassName: baml-itp-trained-pipelines-pv
         volumeName: baml-itp-trained-pipelines-pv
       status:
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 10Gi
       ```
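  With the PVCs above created, the Option 2 storage settings in the custom resource file reference them by name. The following fragment is a sketch only, using the example PVC names from this procedure; the surrounding custom resource structure is omitted:

  ```yaml
  baml_configuration:
    intelligent_task_prioritization:
      storage:
        # Option 2: dynamic provisioning disabled; point at the PVCs created above
        use_dynamic_provisioning: false
        existing_pvc_for_logstore: baml-itp-logstore-pvc
        existing_pvc_for_trained_pipelines: baml-itp-trained-pipelines-pvc
  ```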
- Prepare storage for Workforce Insights.

  Note: This step does not apply to stand-alone Business Automation Workflow.

  The Workforce Insights feature requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning:
    Enable dynamic provisioning by setting `baml_configuration.workforce_insights.storage.use_dynamic_provisioning` to `true`. The log files use the `shared_configuration.storage_configuration.sc_medium_file_storage_classname` storage class for their PV.
  - Option 2: If your environment does not support dynamic provisioning:
    Disable dynamic provisioning by setting `baml_configuration.workforce_insights.storage.use_dynamic_provisioning` to `false`. Then, create a PV and PVC manually, and set `baml_configuration.workforce_insights.storage.existing_pvc_for_logstore` to the name of the log PVC in the custom resource file.
  For Option 2, the following example illustrates how to create the Workforce Insights PV and PVC by using NFS. An existing NFS server is required before you create the PV and PVC.
  - Create the related folder on the NFS server and grant the least privilege to the mounted directory by using the following commands:

    ```
    mkdir -p <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
    ```
  - Create the PV and PVC required by Workforce Insights by saving the following YAML files on the OpenShift master node and then running the `oc apply -f <YAML_FILE_NAME>` command on each of the files in the following order:

    1. baml-wfi-logstore-pv.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: baml-wfi-logstore-pv
       spec:
         storageClassName: baml-wfi-logstore-pv
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
         nfs:
           path: <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
           server: <NFS_SERVER_IP>
         persistentVolumeReclaimPolicy: Recycle
       ```

       where `<NFS_STORAGE_DIRECTORY>` is the storage folder on your NFS server and `<NFS_SERVER_IP>` is the IP address of your NFS server.

    2. baml-wfi-logstore-pvc.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: baml-wfi-logstore-pvc
       spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: 1Gi
         storageClassName: baml-wfi-logstore-pv
         volumeName: baml-wfi-logstore-pv
       status:
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
       ```
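  With the PVC above created, the Option 2 storage settings in the custom resource file reference it by name. The following fragment is a sketch only, using the example PVC name from this procedure; the surrounding custom resource structure is omitted:

  ```yaml
  baml_configuration:
    workforce_insights:
      storage:
        # Option 2: dynamic provisioning disabled; point at the PVC created above
        use_dynamic_provisioning: false
        existing_pvc_for_logstore: baml-wfi-logstore-pvc
  ```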
- Prepare storage for the Java Message Service (JMS).

  The JMS component requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning:
    Enable dynamic provisioning by setting `baw_configuration[x].jms.storage.use_dynamic_provisioning` to `true` and specify the storage class name in `baw_configuration[x].jms.storage.storage_class` in the custom resource file.
  - Option 2: If your environment does not support dynamic provisioning:
    Disable dynamic provisioning by setting `baw_configuration[x].jms.storage.use_dynamic_provisioning` to `false`. Then, create a PV manually and set `baw_configuration[x].jms.storage.storage_class` in the custom resource file to the value of the `storageClassName` property of your PV.
  For Option 2, the following example illustrates how to create the JMS PV by using NFS. An existing NFS server is required before you create the PV.
  - Create the related folder on the NFS server and grant the least privilege to the mounted directory by using the following commands:

    ```
    mkdir -p <NFS_STORAGE_DIRECTORY>/jms
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/jms
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/jms
    ```
  - Create the PV required by JMS by creating a YAML file called `jms-pv.yaml` on the OpenShift master node with the following content:

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jms-pv-baw
    spec:
      storageClassName: "jms-storage-class"
      accessModes:
        - ReadWriteOnce
      capacity:
        storage: 2Gi
      nfs:
        path: <NFS_STORAGE_DIRECTORY>/jms
        server: <NFS_SERVER_IP>
      persistentVolumeReclaimPolicy: Recycle
    ```

    where:
    - `<NFS_STORAGE_DIRECTORY>` is the storage folder on your NFS server
    - `<NFS_SERVER_IP>` is the IP address of your NFS server
    - `accessModes` is set to the same value as the `jms.storage.access_modes` property in the custom resource configuration file
  - Run the following command:

    ```
    oc apply -f jms-pv.yaml
    ```
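  To tie the PV above to the deployment, the Option 2 JMS storage settings in the custom resource file reference the `storageClassName` of the PV. The following fragment is a sketch only, using the example storage class name from this procedure; the surrounding `baw_configuration` list entry is abbreviated to the storage settings:

  ```yaml
  baw_configuration:
    - jms:
        storage:
          # Option 2: dynamic provisioning disabled; match the storageClassName of the PV created above
          use_dynamic_provisioning: false
          storage_class: "jms-storage-class"
  ```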
- Prepare storage for Process Federation Server.

  The Process Federation Server component requires a PV for logs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning. You can optionally choose to persist dump files by setting `pfs_configuration.dump.persistent` to `true`.
  - Option 1: If your environment supports dynamic provisioning:
    Enable dynamic provisioning by setting `pfs_configuration.logs.storage.use_dynamic_provisioning` to `true` and specify the storage class name in `pfs_configuration.logs.storage.storage_class` in the custom resource file. If you also want to persist dump files, set `pfs_configuration.dump.persistent` to `true`.
  - Option 2: If your environment does not support dynamic provisioning:
    Disable dynamic provisioning by setting `pfs_configuration.logs.storage.use_dynamic_provisioning` to `false`. Then, create a PV and PVC manually and set `pfs_configuration.logs.storage.existing_pvc_name` in the custom resource file to the name of the PVC that is bound to your PV. To persist dump files, also disable dynamic provisioning by setting `pfs_configuration.dump.storage.use_dynamic_provisioning` to `false`, create a PV and PVC manually, and set `pfs_configuration.dump.storage.existing_pvc_name` in the custom resource file to the name of the PVC that is bound to your PV.
  For Option 2, the following example illustrates how to create the Process Federation Server PVs by using NFS. An existing NFS server is required before you create the PVs.
  - Create the related folder on the NFS server and grant the least privilege to the mounted directory by using the following commands:

    ```
    mkdir -p <NFS_STORAGE_DIRECTORY>/pfs-logs
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/pfs-logs
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/pfs-logs
    ```
  - Create the PV and PVC required by Process Federation Server by creating a file called `pfs-pv-pvc-logs.yaml` on the OpenShift master node with the following content:

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pfs-logs-pv
    spec:
      storageClassName: "pfs-logs"
      accessModes:
        - ReadWriteMany
      capacity:
        storage: 1Gi
      nfs:
        path: <NFS_STORAGE_DIRECTORY>/pfs-logs
        server: <NFS_SERVER_IP>
      persistentVolumeReclaimPolicy: Recycle
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pfs-logs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: pfs-logs
      volumeName: pfs-logs-pv
    ```

    where `<NFS_STORAGE_DIRECTORY>` is the storage folder on your NFS server and `<NFS_SERVER_IP>` is the IP address of your NFS server.
  - Run the following command:

    ```
    oc apply -f pfs-pv-pvc-logs.yaml
    ```
  - To persist dump files, create the related folder on the NFS server and grant the least privilege to the mounted directory by using the following commands:

    ```
    mkdir -p <NFS_STORAGE_DIRECTORY>/pfs-dump
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/pfs-dump
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/pfs-dump
    ```
  - Create the dump PV and PVC by creating a file called `pfs-pv-pvc-dump.yaml` on the OpenShift master node with the following content:

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pfs-dump-pv
    spec:
      storageClassName: "pfs-dump"
      accessModes:
        - ReadWriteMany
      capacity:
        storage: 5Gi
      nfs:
        path: <NFS_STORAGE_DIRECTORY>/pfs-dump
        server: <NFS_SERVER_IP>
      persistentVolumeReclaimPolicy: Recycle
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pfs-dump-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      storageClassName: pfs-dump
      volumeName: pfs-dump-pv
    ```

    where `<NFS_STORAGE_DIRECTORY>` is the storage folder on your NFS server and `<NFS_SERVER_IP>` is the IP address of your NFS server.
  - Run the following command:

    ```
    oc apply -f pfs-pv-pvc-dump.yaml
    ```
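  With the PVCs above created, the Option 2 Process Federation Server settings in the custom resource file reference them by name. The following fragment is a sketch only, using the example PVC names from this procedure; the surrounding custom resource structure is omitted:

  ```yaml
  pfs_configuration:
    logs:
      storage:
        # Option 2: dynamic provisioning disabled; point at the logs PVC created above
        use_dynamic_provisioning: false
        existing_pvc_name: pfs-logs-pvc
    dump:
      # Optional: persist dump files on the dump PVC created above
      persistent: true
      storage:
        use_dynamic_provisioning: false
        existing_pvc_name: pfs-dump-pvc
  ```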
- Prepare storage for Business Automation Workflow.

  The Business Automation Workflow component requires PVs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning:
    Enable dynamic provisioning by setting `baw_configuration[x].storage.use_dynamic_provisioning` to `true`. The dump files use the `shared_configuration.storage_configuration.sc_slow_file_storage_classname` storage class for their PV, and the log files use the `shared_configuration.storage_configuration.sc_medium_file_storage_classname` storage class for their PV.
  - Option 2: If your environment does not support dynamic provisioning:
    Disable dynamic provisioning by setting `baw_configuration[x].storage.use_dynamic_provisioning` to `false`. Then, create the required PVs and PVCs manually, and in the custom resource file set `baw_configuration[x].storage.existing_pvc_for_logstore` to the name of the log PVC, `baw_configuration[x].storage.existing_pvc_for_dumpstore` to the name of the dump PVC, and `baw_configuration[x].storage.existing_pvc_for_filestore` to the name of the file PVC.
  For Option 2, the following example illustrates how to create the Business Automation Workflow PVs and PVCs by using NFS. An existing NFS server is required before you create the PVs and PVCs.
  - Create the related folders on the NFS server and grant the least privilege to the mounted directories by using the following commands:

    ```
    mkdir -p <NFS_STORAGE_DIRECTORY>/baw/logstore
    mkdir -p <NFS_STORAGE_DIRECTORY>/baw/dumpstore
    mkdir -p <NFS_STORAGE_DIRECTORY>/baw/filestore
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/baw/logstore
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baw/logstore
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/baw/dumpstore
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baw/dumpstore
    chown -R :65534 <NFS_STORAGE_DIRECTORY>/baw/filestore
    chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baw/filestore
    ```
  - Create the PVs and PVCs required by Business Automation Workflow by saving the following YAML files on the OpenShift master node and then running the `oc apply -f <YAML_FILE_NAME>` command on the files in the following order:

    1. baw-logstore-pv.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: baw-logstore-pv
       spec:
         storageClassName: baw-logstore-pv
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
         nfs:
           path: <NFS_STORAGE_DIRECTORY>/baw/logstore
           server: <NFS_SERVER_IP>
         persistentVolumeReclaimPolicy: Recycle
       ```

       where `<NFS_STORAGE_DIRECTORY>` is the storage folder on your NFS server and `<NFS_SERVER_IP>` is the IP address of your NFS server.

    2. baw-logstore-pvc.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: baw-logstore-pvc
       spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: 1Gi
         storageClassName: baw-logstore-pv
         volumeName: baw-logstore-pv
       status:
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
       ```

    3. baw-dumpstore-pv.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: baw-dumpstore-pv
       spec:
         storageClassName: baw-dumpstore-pv
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 5Gi
         nfs:
           path: <NFS_STORAGE_DIRECTORY>/baw/dumpstore
           server: <NFS_SERVER_IP>
         persistentVolumeReclaimPolicy: Recycle
       ```

    4. baw-dumpstore-pvc.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: baw-dumpstore-pvc
       spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: 5Gi
         storageClassName: baw-dumpstore-pv
         volumeName: baw-dumpstore-pv
       status:
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 5Gi
       ```

    5. baw-filestore-pv.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: baw-filestore-pv
       spec:
         storageClassName: baw-filestore-pv
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
         nfs:
           path: <NFS_STORAGE_DIRECTORY>/baw/filestore
           server: <NFS_SERVER_IP>
         persistentVolumeReclaimPolicy: Recycle
       ```

    6. baw-filestore-pvc.yaml

       ```yaml
       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: baw-filestore-pvc
       spec:
         accessModes:
           - ReadWriteMany
         resources:
           requests:
             storage: 1Gi
         storageClassName: baw-filestore-pv
         volumeName: baw-filestore-pv
       status:
         accessModes:
           - ReadWriteMany
         capacity:
           storage: 1Gi
       ```
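  With the PVCs above created, the Option 2 storage settings in the custom resource file reference them by name. The following fragment is a sketch only, using the example PVC names from this procedure; the surrounding `baw_configuration` list entry is abbreviated to the storage settings:

  ```yaml
  baw_configuration:
    - storage:
        # Option 2: dynamic provisioning disabled; point at the PVCs created above
        use_dynamic_provisioning: false
        existing_pvc_for_logstore: baw-logstore-pvc
        existing_pvc_for_dumpstore: baw-dumpstore-pvc
        existing_pvc_for_filestore: baw-filestore-pvc
  ```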
What to do next
- To run IBM Business Automation Workflow with Business Automation Insights, see Preparing to install Business Automation Insights.
- To prepare for customizations, such as custom case widgets and custom case extensions, see Preparing your environment for customizations.
- To see a visual representation of the extended history for a case, see Optional: Enabling the Timeline Visualizer widget to display Business Automation Workflow process activity flow.