Preparing storage

Prepare storage, including the required persistent volumes (PVs) and persistent volume claim (PVCs) for the operator, Application Engine, IBM Business Automation Navigator, IBM FileNet® Content Manager, Intelligent Task Prioritization, Workforce Insights, Java™ Message Service (JMS), IBM Process Federation Server, and IBM Business Automation Workflow.

Procedure

  1. If you haven't already done so, prepare storage for the operator by following the instructions in Preparing the operator and log file storage.
  2. Prepare storage for Application Engine by following the instructions in Implementing storage.
  3. Prepare storage for IBM Business Automation Navigator by following the instructions in Creating volumes and folders for deployment on Kubernetes.
    Important: The value of the baw_configuration[x].case.network_shared_directory_pvc parameter (the persistent volume claim (PVC) name for the case network shared directory) in the custom resource must be the same as the value of the IBM Business Automation Navigator pvc_for_icn_pluginstore parameter.
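    For example, if the Navigator plug-in store PVC were named icn-pluginstore-pvc (a hypothetical name), the custom resource would need a matching value. The excerpt below is a sketch built from the parameter path in this step; the rest of the custom resource structure is not shown.

    ```yaml
    # Sketch only: icn-pluginstore-pvc is a hypothetical PVC name.
    # The value must equal the Navigator pvc_for_icn_pluginstore value.
    baw_configuration:
      - case:
          network_shared_directory_pvc: icn-pluginstore-pvc
    ```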
  4. Prepare storage for IBM FileNet Content Manager by following the instructions in Configuring storage for the content services environment.
  5. Prepare storage for Intelligent Task Prioritization.
    Note: This step doesn't apply to stand-alone Business Automation Workflow.
    The Intelligent Task Prioritization feature requires PVs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning to true. The PV for the trained model files then uses the shared_configuration.storage_configuration.sc_fast_file_storage_classname storage class, and the PV for the log file uses shared_configuration.storage_configuration.sc_medium_file_storage_classname.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning to false. Then, create two PVs and PVCs manually and set baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_logstore to the name of the log PVC and baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_trained_pipelines to the name of the trained pipelines PVC in the custom resource file.

    For Option 2, the following example illustrates how to create the Intelligent Task Prioritization PVs and PVCs by using NFS. An existing NFS server is required before you create the PVs and PVCs.
    1. Create the related folders on the NFS server and grant the least privilege needed on the mounted directories by using the following commands:
      mkdir -p <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
      mkdir -p <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
      
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
    2. Create the PVs and PVCs required by Intelligent Task Prioritization by saving the following YAML files on the OpenShift master node and then running the oc apply -f <YAML_FILE_NAME> command on each file in the following order:
      1. baml-itp-logstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-itp-logstore-pv
      spec:
        storageClassName: baml-itp-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 1Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/baml-itp/logstore
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      where
      • <NFS_STORAGE_DIRECTORY> is the storage folder on your NFS server
      • <NFS_SERVER_IP> is the IP address of your NFS server
      2. baml-itp-logstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-itp-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        storageClassName: baml-itp-logstore-pv
        volumeName: baml-itp-logstore-pv
      3. baml-itp-trained-pipelines-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-itp-trained-pipelines-pv
      spec:
        storageClassName: baml-itp-trained-pipelines-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/baml-itp/trained-pipelines
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      where
      • <NFS_STORAGE_DIRECTORY> is the storage folder on your NFS server
      • <NFS_SERVER_IP> is the IP address of your NFS server
      4. baml-itp-trained-pipelines-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-itp-trained-pipelines-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baml-itp-trained-pipelines-pv
        volumeName: baml-itp-trained-pipelines-pv
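    With the PVs and PVCs above in place, the Option 2 settings for Intelligent Task Prioritization can be entered in the custom resource file. The following excerpt is a sketch: the parameter paths come from this step, but the surrounding structure of your custom resource is assumed and not shown.

    ```yaml
    # Sketch only: PVC names match the examples in this step.
    baml_configuration:
      intelligent_task_prioritization:
        storage:
          use_dynamic_provisioning: false
          existing_pvc_for_logstore: baml-itp-logstore-pvc
          existing_pvc_for_trained_pipelines: baml-itp-trained-pipelines-pvc
    ```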
  6. Prepare storage for Workforce Insights.
    Note: This step doesn't apply to stand-alone Business Automation Workflow.
    The Workforce Insights feature requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baml_configuration.workforce_insights.storage.use_dynamic_provisioning to true. The PV for the log file then uses the shared_configuration.storage_configuration.sc_medium_file_storage_classname storage class.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baml_configuration.workforce_insights.storage.use_dynamic_provisioning to false. Then, create a PV and a PVC manually and set baml_configuration.workforce_insights.storage.existing_pvc_for_logstore in the custom resource file to the name of the log PVC.

    For Option 2, the following example illustrates how to create the Workforce Insights PV and PVC by using NFS. An existing NFS server is required before you create the PV and PVC.
    1. Create the related folder on the NFS server and grant the least privilege needed on the mounted directory by using the following commands:
      mkdir -p <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
      
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
    2. Create the PV and PVC required by Workforce Insights by saving the following YAML files on the OpenShift master node and then running the oc apply -f <YAML_FILE_NAME> command on each file in the following order:
      1. baml-wfi-logstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-wfi-logstore-pv
      spec:
        storageClassName: baml-wfi-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 1Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/baml-wfi/logstore
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      where
      • <NFS_STORAGE_DIRECTORY> is the storage folder on your NFS server
      • <NFS_SERVER_IP> is the IP address of your NFS server
      2. baml-wfi-logstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-wfi-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        storageClassName: baml-wfi-logstore-pv
        volumeName: baml-wfi-logstore-pv
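    With the PV and PVC above in place, the Option 2 settings for Workforce Insights can be entered in the custom resource file. The following excerpt is a sketch: the parameter path comes from this step, but the surrounding structure of your custom resource is assumed and not shown.

    ```yaml
    # Sketch only: the PVC name matches the example in this step.
    baml_configuration:
      workforce_insights:
        storage:
          use_dynamic_provisioning: false
          existing_pvc_for_logstore: baml-wfi-logstore-pvc
    ```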
  7. Prepare storage for the Java Message Service (JMS).
    The JMS component requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baw_configuration[x].jms.storage.use_dynamic_provisioning to true and provide the storage class name in baw_configuration[x].jms.storage.storage_class in the custom resource file.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baw_configuration[x].jms.storage.use_dynamic_provisioning to false. Then, create a PV manually and set baw_configuration[x].jms.storage.storage_class in the custom resource file to the value of the storageClassName property of your PV.

    For Option 2, the following example illustrates how to create the JMS PV by using NFS. An existing NFS server is required before you create the PV.
    1. Create the related folder on the NFS server and grant the least privilege needed on the mounted directory by using the following commands:
      mkdir -p <NFS_STORAGE_DIRECTORY>/jms
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/jms
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/jms
    2. Create the PV required by JMS by creating a YAML file called jms-pv.yaml on the OpenShift master node with the following contents:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: jms-pv-baw
      spec:
        storageClassName: "jms-storage-class"
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 2Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/jms
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      where
      • <NFS_STORAGE_DIRECTORY> is the storage folder on your NFS server
      • accessModes is set to the same value as the jms.storage.access_modes property in the custom resource configuration file
      • <NFS_SERVER_IP> is the IP address of your NFS server
    3. Run the following command:
      oc apply -f jms-pv.yaml
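    After the PV exists, Option 2 is completed by pointing the JMS storage class at the PV's storageClassName value. The following excerpt is a sketch, using the parameter paths from this step; the surrounding custom resource structure is assumed and not shown.

    ```yaml
    # Sketch only: jms-storage-class matches the storageClassName of jms-pv-baw.
    baw_configuration:
      - jms:
          storage:
            use_dynamic_provisioning: false
            storage_class: jms-storage-class
    ```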
  8. Prepare storage for Process Federation Server.
    The Process Federation Server component requires a PV for logs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning. You can optionally choose to persist dump files by setting pfs_configuration.dump.persistent to true.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting pfs_configuration.logs.storage.use_dynamic_provisioning to true and provide the storage class name in pfs_configuration.logs.storage.storage_class in the custom resource file.

      If you also want to persist dump files, set pfs_configuration.dump.persistent to true.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting pfs_configuration.logs.storage.use_dynamic_provisioning to false. Then, create a PV and a PVC manually and set pfs_configuration.logs.storage.existing_pvc_name in the custom resource file to the name of your PVC.

      To persist dump files, also set pfs_configuration.dump.storage.use_dynamic_provisioning to false. Then, create a PV and a PVC manually and set pfs_configuration.dump.storage.existing_pvc_name in the custom resource file to the name of your PVC.

    For Option 2, the following example illustrates how to create the Process Federation Server PVs and PVCs by using NFS. An existing NFS server is required before you create the PVs.
    1. Create the related folder on the NFS server and grant the least privilege needed on the mounted directory by using the following commands:
      mkdir -p <NFS_STORAGE_DIRECTORY>/pfs-logs
      
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/pfs-logs
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/pfs-logs
    2. Create the PV required by Process Federation Server by creating a YAML file on the OpenShift master node.

      Create the pfs-pv-pvc-logs.yaml file with the following content:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-logs-pv
      spec:
        storageClassName: "pfs-logs"
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 1Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/pfs-logs
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pfs-logs-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        storageClassName: pfs-logs
        volumeName: pfs-logs-pv
      
      where
      • <NFS_STORAGE_DIRECTORY> is the storage folder on your NFS server
      • <NFS_SERVER_IP> is the IP address of your NFS server
    3. Run the following command:
      oc apply -f pfs-pv-pvc-logs.yaml
    4. To persist dump files, create the related folder on the NFS server and grant the least privilege needed on the mounted directory by using the following commands:
      mkdir -p <NFS_STORAGE_DIRECTORY>/pfs-dump
      
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/pfs-dump
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/pfs-dump
    5. Create the dump PV by creating a YAML file on the OpenShift master node.

      Create the pfs-pv-pvc-dump.yaml file with the following content:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-dump-pv
      spec:
        storageClassName: "pfs-dump"
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 5Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/pfs-dump
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pfs-dump-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
        storageClassName: pfs-dump
        volumeName: pfs-dump-pv
      where
      • <NFS_STORAGE_DIRECTORY> is the storage folder on your NFS server
      • <NFS_SERVER_IP> is the IP address of your NFS server
    6. Run the following command:
      oc apply -f pfs-pv-pvc-dump.yaml
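    With the log and dump PVs and PVCs created, the Option 2 settings for Process Federation Server can be entered in the custom resource file. The following excerpt is a sketch: the parameter paths come from this step, but the surrounding structure of your custom resource is assumed and not shown.

    ```yaml
    # Sketch only: PVC names match the examples in this step.
    pfs_configuration:
      logs:
        storage:
          use_dynamic_provisioning: false
          existing_pvc_name: pfs-logs-pvc
      dump:
        persistent: true
        storage:
          use_dynamic_provisioning: false
          existing_pvc_name: pfs-dump-pvc
    ```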
  9. Prepare storage for Business Automation Workflow.
    The Business Automation Workflow component requires PVs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baw_configuration[x].storage.use_dynamic_provisioning to true. The PV for the dump files then uses the shared_configuration.storage_configuration.sc_slow_file_storage_classname storage class, and the PV for the log files uses shared_configuration.storage_configuration.sc_medium_file_storage_classname.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baw_configuration[x].storage.use_dynamic_provisioning to false. Then, create the required PVs and PVCs manually and, in the custom resource file, set baw_configuration[x].storage.existing_pvc_for_logstore to the name of the log PVC, baw_configuration[x].storage.existing_pvc_for_dumpstore to the name of the dump PVC, and baw_configuration[x].storage.existing_pvc_for_filestore to the name of the file PVC.

    For Option 2, the following example illustrates how to create the Business Automation Workflow PVs and PVCs by using NFS. An existing NFS server is required before you create the PVs and PVCs.
    1. Create the related folders on the NFS server and grant the least privilege needed on the mounted directories by using the following commands:
      mkdir -p <NFS_STORAGE_DIRECTORY>/baw/logstore
      mkdir -p <NFS_STORAGE_DIRECTORY>/baw/dumpstore
      mkdir -p <NFS_STORAGE_DIRECTORY>/baw/filestore
      
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/baw/logstore
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baw/logstore
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/baw/dumpstore
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baw/dumpstore
      chown -R :65534 <NFS_STORAGE_DIRECTORY>/baw/filestore 
      chmod -R g+rw <NFS_STORAGE_DIRECTORY>/baw/filestore
    2. Create the PVs and PVCs required by Business Automation Workflow by saving the following YAML files on the OpenShift master node and then running the oc apply -f <YAML_FILE_NAME> command on each file in the following order:
      1. baw-logstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-logstore-pv
      spec:
        storageClassName: baw-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 1Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/baw/logstore
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      where
      • <NFS_STORAGE_DIRECTORY> is the storage folder on your NFS server
      • <NFS_SERVER_IP> is the IP address of your NFS server
      2. baw-logstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        storageClassName: baw-logstore-pv
        volumeName: baw-logstore-pv
      
      3. baw-dumpstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-dumpstore-pv
      spec:
        storageClassName: baw-dumpstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 5Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/baw/dumpstore
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      4. baw-dumpstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-dumpstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
        storageClassName: baw-dumpstore-pv
        volumeName: baw-dumpstore-pv
      5. baw-filestore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-filestore-pv
      spec:
        storageClassName: baw-filestore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 1Gi
        nfs:
          path: <NFS_STORAGE_DIRECTORY>/baw/filestore
          server: <NFS_SERVER_IP>
        persistentVolumeReclaimPolicy: Recycle
      6. baw-filestore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-filestore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        storageClassName: baw-filestore-pv
        volumeName: baw-filestore-pv
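    With the log, dump, and file PVs and PVCs created, the Option 2 settings for Business Automation Workflow can be entered in the custom resource file. The following excerpt is a sketch: the parameter paths come from this step, but the surrounding structure of your custom resource is assumed and not shown.

    ```yaml
    # Sketch only: PVC names match the examples in this step.
    baw_configuration:
      - storage:
          use_dynamic_provisioning: false
          existing_pvc_for_logstore: baw-logstore-pvc
          existing_pvc_for_dumpstore: baw-dumpstore-pvc
          existing_pvc_for_filestore: baw-filestore-pvc
    ```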

What to do next

To protect the configuration data you're going to enter, see Creating secrets to protect sensitive configuration data.