Preparing storage

 Containers   V20.x 

Prepare storage, including the required persistent volumes (PVs) and persistent volume claims (PVCs), for the operator,  V20.0.0.2  IBM® Business Automation Application Engine, IBM Business Automation Navigator, IBM FileNet® Content Manager,  V20.0.0.2  Intelligent Task Prioritization,  V20.0.0.2  Workforce Insights, Java™ Message Service (JMS), IBM Process Federation Server, Elasticsearch, and IBM Business Automation Workflow.

Before you begin

For platforms other than Red Hat OpenShift® Container Platform (OCP), make sure that the ansible-operator user, otherwise known as user ID 1001, has write permissions on the storage directory for the operator pod persistent volume claims (PVCs). You also need to make sure that the dba-user user, otherwise known as user ID 50001, has the required permissions on the storage directories for the non-operator pod PVCs.
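
The exact steps depend on your storage provider. The following is a minimal sketch, assuming NFS with example export directories /export/operator and /export/dba that you substitute with your own paths:

  # Give the ansible-operator user (user ID 1001) write access to the operator PVC directory
  chown -R 1001 /export/operator
  chmod -R u+rw /export/operator
  # Give dba-user (user ID 50001) access to the directories for non-operator PVCs
  chown -R 50001 /export/dba
  chmod -R u+rw /export/dba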

Procedure

  1. Prepare storage for the operator.
    The operator requires a persistent volume (PV) and persistent volume claim (PVC) to store the database JDBC driver. The following example illustrates the procedure using Network File System (NFS). An existing NFS server is required before you create PVs and PVCs.
    1. Create a directory for the operator on the NFS server.
      • If you are using Db2®, run the following commands on the NFS server:
        mkdir -p NFS_storage_directory/db/jdbc/db2
        chown -R :65534 NFS_storage_directory/db
        chmod -R g+rw NFS_storage_directory/db
      •  V20.0.0.2  If you are using Oracle, run the following commands on the NFS server:
        mkdir -p NFS_storage_directory/db/jdbc/oracle
        chown -R :65534 NFS_storage_directory/db
        chmod -R g+rw NFS_storage_directory/db
      •  V20.0.0.2  If you are using PostgreSQL, run the following commands on the NFS server:
        mkdir -p NFS_storage_directory/db/jdbc/postgresql
        chown -R :65534 NFS_storage_directory/db
        chmod -R g+rw NFS_storage_directory/db
    2. Copy the JDBC JAR files to the new operator directory.
      • If you are using Db2, copy db2jcc4.jar and db2jcc_license_cu.jar to NFS_storage_directory/db/jdbc/db2.
      •  V20.0.0.2  If you are using Oracle, copy ojdbc8.jar to NFS_storage_directory/db/jdbc/oracle.
      •  V20.0.0.2  If you are using PostgreSQL, copy a PostgreSQL JDBC 4.2 driver JAR file to NFS_storage_directory/db/jdbc/postgresql.
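      After this step, the operator directory tree looks like the following sketch. All three database types are shown, but you need only the directory for the database that you use; the PostgreSQL file name is an example:
        NFS_storage_directory/db
        └── jdbc
            ├── db2
            │   ├── db2jcc4.jar
            │   └── db2jcc_license_cu.jar
            ├── oracle
            │   └── ojdbc8.jar
            └── postgresql
                └── postgresql-42.x.y.jar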
    3. In the /etc/exports configuration file, add the following line at the end:
      NFS_storage_directory *(rw,sync,no_subtree_check)
    4. After you edit and save the /etc/exports configuration file, restart the NFS service for the changes to take effect. The following example shows how to restart the NFS service on RHEL 7:
      systemctl stop nfs
      systemctl stop rpcbind
      systemctl start rpcbind
      systemctl start nfs
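      To confirm that the directory is exported, you can list the exports from a client machine (assuming the showmount utility is installed):
        showmount -e NFS_server_IP
      where NFS_server_IP is the IP address of your NFS server.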
    5. To create the shared PV and PVC that are required by the operator, create a file that is named operator-shared-pv-pvc.yaml with the following contents:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: operator-shared-pv
      spec:
        storageClassName: "ecm-openshift-nfs"
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 1Gi
        nfs:
          path: NFS_storage_directory/db
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      ---
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: operator-shared-pvc
        annotations:
          volume.beta.kubernetes.io/storage-class: "ecm-openshift-nfs"
        labels:
          app.kubernetes.io/instance: ibm-dba
          app.kubernetes.io/managed-by: ibm-dba
          app.kubernetes.io/name: ibm-dba
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
      where
      • NFS_storage_directory is the operator folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
    6. Run the following command on the OpenShift master node:
      oc apply -f operator-shared-pv-pvc.yaml
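      To verify that the claim is bound to the volume, check that both objects report a Bound status in the project where you created them:
        oc get pv operator-shared-pv
        oc get pvc operator-shared-pvc
      If the PVC remains in Pending, recheck the storage class name and the NFS path and server values.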
    7.  V20.0.0.2  If you are using Oracle or PostgreSQL, modify the baw_configuration section of the custom resource template, under the database data configuration, to use the custom JDBC drivers, the custom JDBC PVC name, and the JDBC driver files, as in the sketch that follows.
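      The following is a minimal sketch only; the key names shown (use_custom_jdbc_drivers, custom_jdbc_pvc, jdbc_driver_files) are illustrative, so confirm the exact parameter names in your custom resource template:
        baw_configuration:
          - name: instance1                          # illustrative instance name
            database:
              use_custom_jdbc_drivers: true          # illustrative key: enable the custom drivers
              custom_jdbc_pvc: operator-shared-pvc   # illustrative key: the PVC created in step 1.5
              jdbc_driver_files: ojdbc8.jar          # illustrative key: the driver file from step 1.2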
  2.  V20.0.0.2  Prepare storage for Application Engine by following the instructions in Implementing storage.
  3. Prepare storage for IBM Business Automation Navigator by following the instructions in Creating Business Automation Navigator volumes and folders for deployment on Kubernetes.
    Important: The value of the baw_configuration[x].case.network_shared_directory_pvc parameter (the persistent volume claim name for the case network shared directory) in the custom resource must be the same as the value of the Business Automation Navigator pvc_for_icn_pluginstore parameter.
  4. Prepare storage for FileNet Content Manager by following the instructions in Configuring storage for the content services environment.
  5.  V20.0.0.2  Prepare storage for Intelligent Task Prioritization.
    The Intelligent Task Prioritization feature requires PVs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning to true. The PV for the trained model files uses the storage class that is set in shared_configuration.storage_configuration.sc_fast_file_storage_classname, and the PV for the log files uses the storage class that is set in shared_configuration.storage_configuration.sc_medium_file_storage_classname.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning to false. Then, create two PVs and two PVCs manually, and in the custom resource file set baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_logstore to the name of the log PVC and baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_trained_pipelines to the name of the trained pipelines PVC, as in the sketch at the end of this step.

    The following example illustrates how to create the Intelligent Task Prioritization PVs and PVCs by using NFS. An existing NFS server is required before you create PVs and PVCs.
    1. Create the related folders on an NFS server, granting the mounted directories the least privilege that they need, by using the following commands:
      mkdir -p NFS_storage_directory/baml-itp/logstore
      mkdir -p NFS_storage_directory/baml-itp/trained-pipelines
      
      chown -R :65534 NFS_storage_directory/baml-itp/logstore
      chmod -R g+rw NFS_storage_directory/baml-itp/logstore
      chown -R :65534 NFS_storage_directory/baml-itp/trained-pipelines
      chmod -R g+rw NFS_storage_directory/baml-itp/trained-pipelines
    2. Create the PVs and PVCs required by Intelligent Task Prioritization by saving the following YAML files on the OpenShift master node and then running the oc apply -f YAML_file_name command on each of the files in the following order:
      1. baml-itp-logstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-itp-logstore-pv
      spec:
        storageClassName: baml-itp-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baml-itp/logstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
      2. baml-itp-logstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-itp-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baml-itp-logstore-pv
        volumeName: baml-itp-logstore-pv
      status:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
      3. baml-itp-trained-pipelines-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-itp-trained-pipelines-pv
      spec:
        storageClassName: baml-itp-trained-pipelines-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baml-itp/trained-pipelines
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
      4. baml-itp-trained-pipelines-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-itp-trained-pipelines-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baml-itp-trained-pipelines-pv
        volumeName: baml-itp-trained-pipelines-pv
      status:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
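    With the PVCs in place, the matching Option 2 entries in the custom resource file look like the following sketch, assuming the PVC names from this example:
      baml_configuration:
        intelligent_task_prioritization:
          storage:
            use_dynamic_provisioning: false
            existing_pvc_for_logstore: baml-itp-logstore-pvc
            existing_pvc_for_trained_pipelines: baml-itp-trained-pipelines-pvc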
  6.  V20.0.0.2  Prepare storage for Workforce Insights.
    The Workforce Insights feature requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baml_configuration.workforce_insights.storage.use_dynamic_provisioning to true. The PV for the log files uses the storage class that is set in shared_configuration.storage_configuration.sc_medium_file_storage_classname.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baml_configuration.workforce_insights.storage.use_dynamic_provisioning to false. Then, create a PV and a PVC manually and set baml_configuration.workforce_insights.storage.existing_pvc_for_logstore to the name of the log PVC in the custom resource file, as in the sketch at the end of this step.

    The following example illustrates how to create the Workforce Insights PV and PVC by using NFS. An existing NFS server is required before you create PVs and PVCs.
    1. Create the related folders on an NFS server, granting the mounted directories the least privilege that they need, by using the following commands:
      mkdir -p NFS_storage_directory/baml-wfi/logstore
      
      chown -R :65534 NFS_storage_directory/baml-wfi/logstore
      chmod -R g+rw NFS_storage_directory/baml-wfi/logstore
    2. Create the PV and PVC required by Workforce Insights by saving the following YAML files on the OpenShift master node and then running the oc apply -f YAML_file_name command on each of the files in the following order:
      1. baml-wfi-logstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-wfi-logstore-pv
      spec:
        storageClassName: baml-wfi-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baml-wfi/logstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
      2. baml-wfi-logstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-wfi-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baml-wfi-logstore-pv
        volumeName: baml-wfi-logstore-pv
      status:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
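    With the PVC in place, the matching Option 2 entries in the custom resource file look like the following sketch, assuming the PVC name from this example:
      baml_configuration:
        workforce_insights:
          storage:
            use_dynamic_provisioning: false
            existing_pvc_for_logstore: baml-wfi-logstore-pvc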
  7. Prepare storage for the Java Message Service (JMS).
    The JMS component requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baw_configuration[x].jms.storage.use_dynamic_provisioning to true and provide the storage class name in baw_configuration[x].jms.storage.storage_class in the custom resource file.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baw_configuration[x].jms.storage.use_dynamic_provisioning to false. Then, create a PV manually and set baw_configuration[x].jms.storage.storage_class in the custom resource file to the value of the storageClassName property of your PV, as in the sketch at the end of this step.

    The following example illustrates the JMS PV creation procedure that uses NFS. An existing NFS server is required before you create PVs.
    1. Create the related folders on an NFS server, granting the mounted directories the least privilege that they need, by using the following commands:
      mkdir -p NFS_storage_directory/jms
      chown -R :65534 NFS_storage_directory/jms
      chmod -R g+rw NFS_storage_directory/jms
    2. Create the PV required by JMS by creating a YAML file called jms-pv.yaml on the OpenShift master node with the following contents:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: jms-pv-baw
      spec:
        storageClassName: "jms-storage-class"
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 2Gi
        nfs:
          path: NFS_storage_directory/jms
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • accessModes is set to the same value as the baw_configuration[x].jms.storage.access_modes property in the custom resource configuration file
      • NFS_server_IP is the IP address of your NFS server
    3. Run the following command:
      oc apply -f jms-pv.yaml
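    If you created the PV manually (Option 2), the matching custom resource entries look like the following sketch, assuming the storage class and access mode used in this example:
      baw_configuration:
        - jms:
            storage:
              use_dynamic_provisioning: false
              storage_class: jms-storage-class
              access_modes: ReadWriteOnce   # must match the accessModes of the PV; check your template for the expected format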
  8. Prepare storage for IBM Process Federation Server.
    The Process Federation Server component requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting pfs_configuration.logs.storage.use_dynamic_provisioning to true and provide the storage class name in pfs_configuration.logs.storage.storage_class in the custom resource file.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting pfs_configuration.logs.storage.use_dynamic_provisioning to false. Then, create a PV manually and set pfs_configuration.logs.storage.storage_class in the custom resource file to the value of the storageClassName property of your PV, as in the sketch at the end of this step.

    The following example illustrates the Process Federation Server PV creation procedure that uses NFS. An existing NFS server is required before you create PVs.
    1. Create the related folders on an NFS server, granting the mounted directories the least privilege that they need, by using the following commands:
       V20.0.0.2 
      mkdir -p NFS_storage_directory/pfs-logs
      
      chown -R :65534 NFS_storage_directory/pfs-logs
      chmod -R g+rw NFS_storage_directory/pfs-logs
       V20.0.0.1 
      mkdir -p NFS_storage_directory/pfs-logs-0
      
      chown -R :65534 NFS_storage_directory/pfs-logs-0
      chmod -R g+rw NFS_storage_directory/pfs-logs-0
    2. Create the PV (and, for V20.0.0.2, the PVC) required by Process Federation Server on the OpenShift master node.
       V20.0.0.2  Create the pfs-pv-pfs-logs.yaml file with the following content:
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-logs-pv
      spec:
        storageClassName: "pfs-logs"
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/pfs-logs
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pfs-logs-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: pfs-logs
        volumeName: pfs-logs-pv
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
       V20.0.0.1  The number of PVs to be created depends on your pfs_configuration.replicas setting. Create the same number of PVs as replicas. Create the pfs-pv-pfs-logs-0.yaml file with the following content:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-logs-0
      spec:
        storageClassName: "pfs-logs"
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 5Gi
        nfs:
          path: NFS_storage_directory/pfs-logs-0
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
    3. Run the following command:
       V20.0.0.2 
      oc apply -f pfs-pv-pfs-logs.yaml
       V20.0.0.1 
      oc apply -f pfs-pv-pfs-logs-0.yaml
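    For Option 2, the matching custom resource entries look like the following sketch, assuming the storage class name from this example:
      pfs_configuration:
        logs:
          storage:
            use_dynamic_provisioning: false
            storage_class: pfs-logs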
  9. Prepare storage for Elasticsearch.
    The Elasticsearch component on Process Federation Server requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting elasticsearch_configuration.storage.use_dynamic_provisioning to true and provide the storage class name in elasticsearch_configuration.storage.storage_class in the custom resource file.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting elasticsearch_configuration.storage.use_dynamic_provisioning to false. Then, create a PV manually and set elasticsearch_configuration.storage.storage_class in the custom resource file to the value of the storageClassName property of your PV, as in the sketch at the end of this step.

    The following example illustrates the Elasticsearch PV creation procedure that uses NFS. An existing NFS server is required before you create PVs.
    1. Create the related folders on an NFS server, granting the mounted directories the least privilege that they need, by using the following commands:
      mkdir -p NFS_storage_directory/pfs-es-0
      
      chown -R :65534 NFS_storage_directory/pfs-es-0
      chmod -R g+rw NFS_storage_directory/pfs-es-0
      
    2. Create the PV required by Elasticsearch by creating a YAML file called pfs-pv-pfs-es-0.yaml on the OpenShift master node with the following contents:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-es-0
      spec:
        storageClassName: "pfs-es"
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/pfs-es-0
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
      • The number of PVs to be created depends on your elasticsearch_configuration.replicas setting. Create the same number of PVs as replicas.
    3. Run the following command:
      oc apply -f pfs-pv-pfs-es-0.yaml
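    For Option 2, the matching custom resource entries look like the following sketch, assuming the storage class name from this example:
      elasticsearch_configuration:
        storage:
          use_dynamic_provisioning: false
          storage_class: pfs-es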
  10. Prepare storage for IBM Business Automation Workflow.
    The IBM Business Automation Workflow component requires PVs to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
    • Option 1: If your environment supports dynamic provisioning:

      Enable dynamic provisioning by setting baw_configuration.storage.use_dynamic_provisioning to true. The PV for the dump files uses the storage class that is set in shared_configuration.storage_configuration.sc_slow_file_storage_classname, and the PV for the log files uses the storage class that is set in shared_configuration.storage_configuration.sc_medium_file_storage_classname.

    • Option 2: If your environment does not support dynamic provisioning:

      Disable dynamic provisioning by setting baw_configuration.storage.use_dynamic_provisioning to false. Then, create the required PVs and PVCs manually, and in the custom resource file set baw_configuration.storage.existing_pvc_for_logstore to the name of the log PVC and baw_configuration.storage.existing_pvc_for_dumpstore to the name of the dump PVC.  V20.0.0.2  Also set baw_configuration.storage.existing_pvc_for_filestore to the name of the file PVC. See the sketch at the end of this step.

    The following example illustrates the IBM Business Automation Workflow PV and PVC creation procedure that uses NFS. An existing NFS server is required before you create PVs and PVCs.
    1. Create the related folders on an NFS server, granting the mounted directories the least privilege that they need, by using the following commands:
      mkdir -p NFS_storage_directory/baw/logstore
      mkdir -p NFS_storage_directory/baw/dumpstore
       V20.0.0.2  mkdir -p NFS_storage_directory/baw/filestore
      
      chown -R :65534 NFS_storage_directory/baw/logstore
      chmod -R g+rw NFS_storage_directory/baw/logstore
      chown -R :65534 NFS_storage_directory/baw/dumpstore
      chmod -R g+rw NFS_storage_directory/baw/dumpstore
       V20.0.0.2  chown -R :65534 NFS_storage_directory/baw/filestore
       V20.0.0.2  chmod -R g+rw NFS_storage_directory/baw/filestore
    2. Create the PVs and PVCs required by IBM Business Automation Workflow by saving the following YAML files on the OpenShift master node and then running the oc apply -f YAML_file_name command on the files in the following order:
      1. baw-logstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-logstore-pv
      spec:
        storageClassName: baw-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baw/logstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      where
      • NFS_storage_directory is the storage folder on your NFS server
      • NFS_server_IP is the IP address of your NFS server
      2. baw-logstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baw-logstore-pv
        volumeName: baw-logstore-pv
      status:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
      
      3. baw-dumpstore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-dumpstore-pv
      spec:
        storageClassName: baw-dumpstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baw/dumpstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      4. baw-dumpstore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-dumpstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baw-dumpstore-pv
        volumeName: baw-dumpstore-pv
      status:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
       V20.0.0.2  5. baw-filestore-pv.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-filestore-pv
      spec:
        storageClassName: baw-filestore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baw/filestore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
       V20.0.0.2  6. baw-filestore-pvc.yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-filestore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baw-filestore-pv
        volumeName: baw-filestore-pv
      status:
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
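    With the PVCs in place, the matching Option 2 entries in the custom resource file look like the following sketch, assuming the PVC names from this example:
      baw_configuration:
        - storage:
            use_dynamic_provisioning: false
            existing_pvc_for_logstore: baw-logstore-pvc
            existing_pvc_for_dumpstore: baw-dumpstore-pvc
            existing_pvc_for_filestore: baw-filestore-pvc   #  V20.0.0.2  only
    As a final check, you can confirm that every claim that you created reports a Bound status before you deploy:
      oc get pvc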