Preparing storage

Containers: Prepare storage, including the required persistent volumes (PVs) and persistent volume claims (PVCs), for the operator, IBM® Business Automation Application Engine (V20.0.0.2), IBM Business Automation Navigator, IBM FileNet® Content Manager, Intelligent Task Prioritization (V20.0.0.2), Workforce Insights (V20.0.0.2), Java™ Message Service (JMS), IBM Process Federation Server, Elasticsearch, and IBM Business Automation Workflow.
Before you begin

Make sure that ansible-operator, otherwise known as user ID 1001, has write permissions for the storage directory for operator pod persistent volume claims (PVCs). You also need to make sure that dba-user, otherwise known as user ID 50001, has permissions for the storage directory for non-operator pod PVCs.
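The steps in this topic grant access by assigning group 65534 and group read/write permissions to the exported directories. After you create the directories, you can optionally confirm those settings on the NFS server with a command like the following sketch, where NFS_storage_directory is your chosen export path:

    # Show the owner, group, and permission bits of an export directory.
    stat -c '%U:%G %A %n' NFS_storage_directory/db

Procedure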
- Prepare storage for the operator. The operator requires a persistent volume (PV) and persistent volume claim (PVC) to store the database JDBC driver. The following example illustrates the procedure using Network File System (NFS). An existing NFS server is required before you create PVs and PVCs.
- Create a directory for the operator on the NFS server.
  - If you are using Db2®, run the following commands on the NFS server:

      mkdir -p NFS_storage_directory/db/jdbc/db2
      chown -R :65534 NFS_storage_directory/db
      chmod -R g+rw NFS_storage_directory/db

  - V20.0.0.2 If you are using Oracle, run the following commands on the NFS server:

      mkdir -p NFS_storage_directory/db/jdbc/oracle
      chown -R :65534 NFS_storage_directory/db
      chmod -R g+rw NFS_storage_directory/db

  - V20.0.0.2 If you are using PostgreSQL, run the following commands on the NFS server:

      mkdir -p NFS_storage_directory/db/jdbc/postgresql
      chown -R :65534 NFS_storage_directory/db
      chmod -R g+rw NFS_storage_directory/db
- Copy the JDBC JAR files to the new operator directory.
  - If you are using Db2, copy db2jcc4.jar and db2jcc_license_cu.jar to NFS_storage_directory/db/jdbc/db2.
  - V20.0.0.2 If you are using Oracle, copy ojdbc8.jar to NFS_storage_directory/db/jdbc/oracle.
  - V20.0.0.2 If you are using PostgreSQL, copy a PostgreSQL JDBC 4.2 driver JAR file to NFS_storage_directory/db/jdbc/postgresql.
- In the /etc/exports configuration file, add the following line at the end:

      NFS_storage_directory *(rw,sync,no_subtree_check)

- After you edit and save the /etc/exports configuration file, you must restart the NFS service for the changes to take effect. The following example shows restarting the NFS service on RHEL 7:

      systemctl stop nfs
      systemctl stop rpcbind
      systemctl start rpcbind
      systemctl start nfs
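Optionally, you can confirm that the directory is exported by querying the server from any NFS client (showmount is part of the standard NFS utilities; NFS_server_IP is the address of your NFS server):

    # List the exports that the NFS server advertises.
    showmount -e NFS_server_IP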
- To create the shared PV and PVC that are required by the operator, create a file that is named operator-shared-pv-pvc.yaml with the following contents:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: operator-shared-pv
      spec:
        storageClassName: "ecm-openshift-nfs"
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 1Gi
        nfs:
          path: NFS_storage_directory/db
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      ---
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: operator-shared-pvc
        annotations:
          volume.beta.kubernetes.io/storage-class: "ecm-openshift-nfs"
        labels:
          app.kubernetes.io/instance: ibm-dba
          app.kubernetes.io/managed-by: ibm-dba
          app.kubernetes.io/name: ibm-dba
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 1Gi

  where:
  - NFS_storage_directory is the operator folder on your NFS server
  - NFS_server_IP is the IP address of your NFS server
- Run the following command on the OpenShift master node:

      oc apply -f operator-shared-pv-pvc.yaml

- V20.0.0.2 If you are using Oracle or PostgreSQL, modify the baw_configuration section of the custom resource template, under the database data configuration, to use the custom JDBC drivers, custom JDBC PVC name, and JDBC driver files.
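A minimal sketch of that database configuration is shown below. The property names (use_custom_jdbc_drivers, custom_jdbc_pvc, jdbc_driver_files) are illustrative assumptions; use the names that appear in your custom resource template:

    baw_configuration:
      - database:
          # Hypothetical property names -- check your CR template for the real ones.
          use_custom_jdbc_drivers: true
          custom_jdbc_pvc: operator-shared-pvc    # the PVC created in the previous step
          jdbc_driver_files: ojdbc8.jar           # or a PostgreSQL JDBC 4.2 driver JAR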
- V20.0.0.2 Prepare storage for Application Engine by following the instructions in Implementing storage.
- Prepare storage for IBM Business Automation Navigator by following the instructions in Creating Business Automation Navigator volumes and folders for deployment on Kubernetes.
  Important: The value of the baw_configuration[x].case.network_shared_directory_pvc parameter (the persistent volume claim (PVC) name for the case network shared directory) in the custom resource must be set to the same value as the Business Automation Navigator pvc_for_icn_pluginstore parameter.
- Prepare storage for FileNet Content Manager by following the instructions in Configuring storage for the content services environment.
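For example, if the Navigator plug-in store PVC is named icn-pluginstore-pvc (an illustrative name), both parameters must carry that same value, as in this sketch; the exact location of pvc_for_icn_pluginstore in the custom resource depends on your Navigator configuration section:

    baw_configuration:
      - case:
          network_shared_directory_pvc: icn-pluginstore-pvc   # must match the Navigator value
    navigator_configuration:
      pvc_for_icn_pluginstore: icn-pluginstore-pvc            # assumed placement, shown for illustration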
- V20.0.0.2 Prepare storage for Intelligent Task Prioritization.
  The Intelligent Task Prioritization feature requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning, enable dynamic provisioning by setting baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning to true. The trained model files use the storage class that is specified by shared_configuration.storage_configuration.sc_fast_file_storage_classname, and the log files use the storage class that is specified by shared_configuration.storage_configuration.sc_medium_file_storage_classname.
  - Option 2: If your environment does not support dynamic provisioning, disable dynamic provisioning by setting baml_configuration.intelligent_task_prioritization.storage.use_dynamic_provisioning to false. Then, create two PVs and PVCs manually and, in the custom resource file, set baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_logstore to the name of the log PVC and baml_configuration.intelligent_task_prioritization.storage.existing_pvc_for_trained_pipelines to the name of the trained pipelines PVC.
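For Option 2, the resulting storage section of the custom resource would look similar to this sketch, using the PVC names from the NFS example that follows:

    baml_configuration:
      intelligent_task_prioritization:
        storage:
          use_dynamic_provisioning: false
          existing_pvc_for_logstore: baml-itp-logstore-pvc
          existing_pvc_for_trained_pipelines: baml-itp-trained-pipelines-pvc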
  The following example illustrates the Intelligent Task Prioritization PV and PVC creation procedure using NFS. An existing NFS server is required before you create PVs and PVCs.
  - Create the related folders on the NFS server, granting the least privilege to the mounted directories with the following commands:

      mkdir -p NFS_storage_directory/baml-itp/logstore
      mkdir -p NFS_storage_directory/baml-itp/trained-pipelines
      chown -R :65534 NFS_storage_directory/baml-itp/logstore
      chmod -R g+rw NFS_storage_directory/baml-itp/logstore
      chown -R :65534 NFS_storage_directory/baml-itp/trained-pipelines
      chmod -R g+rw NFS_storage_directory/baml-itp/trained-pipelines
  - Create the PVs and PVCs required by Intelligent Task Prioritization by saving the following YAML files on the OpenShift master node and then running the oc apply -f YAML_file_name command on each of the files in the following order. In all of the files, NFS_storage_directory is the storage folder on your NFS server and NFS_server_IP is the IP address of your NFS server.

    1. baml-itp-logstore-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-itp-logstore-pv
      spec:
        storageClassName: baml-itp-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baml-itp/logstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    2. baml-itp-logstore-pvc.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-itp-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baml-itp-logstore-pv
        volumeName: baml-itp-logstore-pv

    3. baml-itp-trained-pipelines-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-itp-trained-pipelines-pv
      spec:
        storageClassName: baml-itp-trained-pipelines-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baml-itp/trained-pipelines
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    4. baml-itp-trained-pipelines-pvc.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-itp-trained-pipelines-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baml-itp-trained-pipelines-pv
        volumeName: baml-itp-trained-pipelines-pv
- V20.0.0.2 Prepare storage for Workforce Insights.
  The Workforce Insights feature requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning, enable dynamic provisioning by setting baml_configuration.workforce_insights.storage.use_dynamic_provisioning to true. The log files use the storage class that is specified by shared_configuration.storage_configuration.sc_medium_file_storage_classname.
  - Option 2: If your environment does not support dynamic provisioning, disable dynamic provisioning by setting baml_configuration.workforce_insights.storage.use_dynamic_provisioning to false. Then, create a PV and PVC manually and set baml_configuration.workforce_insights.storage.existing_pvc_for_logstore to the name of the log PVC in the custom resource file.
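For Option 2, the storage section of the custom resource would look similar to this sketch, using the PVC name from the NFS example that follows:

    baml_configuration:
      workforce_insights:
        storage:
          use_dynamic_provisioning: false
          existing_pvc_for_logstore: baml-wfi-logstore-pvc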
  The following example illustrates the Workforce Insights PV and PVC creation procedure using NFS. An existing NFS server is required before you create PVs and PVCs.
  - Create the related folder on the NFS server, granting the least privilege to the mounted directory with the following commands:

      mkdir -p NFS_storage_directory/baml-wfi/logstore
      chown -R :65534 NFS_storage_directory/baml-wfi/logstore
      chmod -R g+rw NFS_storage_directory/baml-wfi/logstore
  - Create the PV and PVC required by Workforce Insights by saving the following YAML files on the OpenShift master node and then running the oc apply -f YAML_file_name command on each of the files in the following order. In both files, NFS_storage_directory is the storage folder on your NFS server and NFS_server_IP is the IP address of your NFS server.

    1. baml-wfi-logstore-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baml-wfi-logstore-pv
      spec:
        storageClassName: baml-wfi-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baml-wfi/logstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    2. baml-wfi-logstore-pvc.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baml-wfi-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baml-wfi-logstore-pv
        volumeName: baml-wfi-logstore-pv
- Prepare storage for the Java Message Service (JMS).
  The JMS component requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning, enable dynamic provisioning by setting baw_configuration[x].jms.storage.use_dynamic_provisioning to true and provide the storage class name in baw_configuration[x].jms.storage.storage_class in the custom resource file.
  - Option 2: If your environment does not support dynamic provisioning, disable dynamic provisioning by setting baw_configuration[x].jms.storage.use_dynamic_provisioning to false. Then, create a PV manually and set baw_configuration[x].jms.storage.storage_class in the custom resource file to the value of the storageClassName property of your PV.
  The following example illustrates the JMS PV creation procedure using NFS. An existing NFS server is required before you create PVs.
  - Create the related folder on the NFS server, granting the least privilege to the mounted directory with the following commands:

      mkdir -p NFS_storage_directory/jms
      chown -R :65534 NFS_storage_directory/jms
      chmod -R g+rw NFS_storage_directory/jms
  - Create the PV required by JMS by creating a YAML file called jms-pv.yaml on the OpenShift master node with the following contents:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: jms-pv-baw
      spec:
        storageClassName: "jms-storage-class"
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 2Gi
        nfs:
          path: NFS_storage_directory/jms
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    where:
    - NFS_storage_directory is the storage folder on your NFS server
    - accessModes is set to the same value as the baw_configuration[x].jms.storage.access_modes property in the custom resource configuration file
    - NFS_server_IP is the IP address of your NFS server
  - Run the following command:

      oc apply -f jms-pv.yaml
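To bind to this PV, the JMS storage settings in the custom resource would look similar to this sketch; the property names are the ones cited in this step, the values match the example PV, and the value shape of access_modes (string or list) depends on your template:

    baw_configuration:
      - jms:
          storage:
            use_dynamic_provisioning: false
            storage_class: "jms-storage-class"   # matches storageClassName of jms-pv-baw
            access_modes: ReadWriteOnce          # matches accessModes of the PV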
- Prepare storage for IBM Process Federation Server.
  The Process Federation Server component requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning, enable dynamic provisioning by setting pfs_configuration.logs.storage.use_dynamic_provisioning to true and provide the storage class name in pfs_configuration.logs.storage.storage_class in the custom resource file.
  - Option 2: If your environment does not support dynamic provisioning, disable dynamic provisioning by setting pfs_configuration.logs.storage.use_dynamic_provisioning to false. Then, create a PV manually and set pfs_configuration.logs.storage.storage_class in the custom resource file to the value of the storageClassName property of your PV.
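For Option 2, the logs storage settings in the custom resource would look similar to this sketch, using the storage class from the NFS example that follows:

    pfs_configuration:
      logs:
        storage:
          use_dynamic_provisioning: false
          storage_class: pfs-logs   # matches the storageClassName of the PV created below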
  The following example illustrates the Process Federation Server PV creation procedure using NFS. An existing NFS server is required before you create PVs.
  - Create the related folders on the NFS server, granting the least privilege to the mounted directories with the following commands:

    V20.0.0.2

      mkdir -p NFS_storage_directory/pfs-logs
      chown -R :65534 NFS_storage_directory/pfs-logs
      chmod -R g+rw NFS_storage_directory/pfs-logs

    V20.0.0.1

      mkdir -p NFS_storage_directory/pfs-logs-0
      chown -R :65534 NFS_storage_directory/pfs-logs-0
      chmod -R g+rw NFS_storage_directory/pfs-logs-0
  - Create the PV required by Process Federation Server on the OpenShift master node.

    V20.0.0.2 Create the pfs-pv-pfs-logs.yaml file with the following content:

      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-logs-pv
      spec:
        storageClassName: "pfs-logs"
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/pfs-logs
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pfs-logs-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: pfs-logs
        volumeName: pfs-logs-pv

    where:
    - NFS_storage_directory is the storage folder on your NFS server
    - NFS_server_IP is the IP address of your NFS server

    V20.0.0.1 The number of PVs to create depends on your pfs_configuration.replicas setting: create the same number of PVs as replicas. Create the pfs-pv-pfs-logs-0.yaml file with the following content:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-logs-0
      spec:
        storageClassName: "pfs-logs"
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 5Gi
        nfs:
          path: NFS_storage_directory/pfs-logs-0
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    where:
    - NFS_storage_directory is the storage folder on your NFS server
    - NFS_server_IP is the IP address of your NFS server
  - Run the following command:

    V20.0.0.2

      oc apply -f pfs-pv-pfs-logs.yaml

    V20.0.0.1

      oc apply -f pfs-pv-pfs-logs-0.yaml
- Prepare storage for Elasticsearch.
  The Elasticsearch component that is used by Process Federation Server requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning, enable dynamic provisioning by setting elasticsearch_configuration.storage.use_dynamic_provisioning to true and provide the storage class name in elasticsearch_configuration.storage.storage_class in the custom resource file.
  - Option 2: If your environment does not support dynamic provisioning, disable dynamic provisioning by setting elasticsearch_configuration.storage.use_dynamic_provisioning to false. Then, create a PV manually and set elasticsearch_configuration.storage.storage_class in the custom resource file to the value of the storageClassName property of your PV.
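For Option 2, the Elasticsearch storage settings in the custom resource would look similar to this sketch, using the storage class from the NFS example that follows:

    elasticsearch_configuration:
      storage:
        use_dynamic_provisioning: false
        storage_class: pfs-es   # matches the storageClassName of the PV created below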
  The following example illustrates the Elasticsearch PV creation procedure using NFS. An existing NFS server is required before you create PVs.
  - Create the related folders on the NFS server, granting the least privilege to the mounted directories with the following commands:

      mkdir -p NFS_storage_directory/pfs-es-0
      chown -R :65534 NFS_storage_directory/pfs-es-0
      chmod -R g+rw NFS_storage_directory/pfs-es-0
  - Create the PV required by Elasticsearch by creating a YAML file called pfs-pv-pfs-es-0.yaml on the OpenShift master node with the following contents:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pfs-es-0
      spec:
        storageClassName: "pfs-es"
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/pfs-es-0
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    where:
    - NFS_storage_directory is the storage folder on your NFS server
    - NFS_server_IP is the IP address of your NFS server

    Note: The number of PVs to create depends on your elasticsearch_configuration.replicas setting: create the same number of PVs as replicas.
  - Run the following command:

      oc apply -f pfs-pv-pfs-es-0.yaml
- Prepare storage for IBM Business Automation Workflow.
  The IBM Business Automation Workflow component requires a PV to be created before you can deploy. You have the following options, depending on whether your Kubernetes environment supports dynamic provisioning.
  - Option 1: If your environment supports dynamic provisioning, enable dynamic provisioning by setting baw_configuration.storage.use_dynamic_provisioning to true. The dump files use the storage class that is specified by shared_configuration.storage_configuration.sc_slow_file_storage_classname, and the log files use the storage class that is specified by shared_configuration.storage_configuration.sc_medium_file_storage_classname.
  - Option 2: If your environment does not support dynamic provisioning, disable dynamic provisioning by setting baw_configuration.storage.use_dynamic_provisioning to false. Then, create the required PVs and PVCs manually and, in the custom resource file, set baw_configuration.storage.existing_pvc_for_logstore to the name of the log PVC and baw_configuration.storage.existing_pvc_for_dumpstore to the name of the dump PVC. V20.0.0.2 Also set baw_configuration.storage.existing_pvc_for_filestore to the name of the file PVC.
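For Option 2, the Workflow storage settings in the custom resource would look similar to this sketch, using the PVC names from the NFS example that follows; if your template indexes this section as baw_configuration[x], place the storage block inside the corresponding list entry:

    baw_configuration:
      - storage:
          use_dynamic_provisioning: false
          existing_pvc_for_logstore: baw-logstore-pvc
          existing_pvc_for_dumpstore: baw-dumpstore-pvc
          existing_pvc_for_filestore: baw-filestore-pvc   # V20.0.0.2 only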
  The following example illustrates the IBM Business Automation Workflow PV and PVC creation procedure using NFS. An existing NFS server is required before you create PVs and PVCs.
  - Create the related folders on the NFS server, granting the least privilege to the mounted directories with the following commands (the filestore commands apply to V20.0.0.2 only):

      mkdir -p NFS_storage_directory/baw/logstore
      mkdir -p NFS_storage_directory/baw/dumpstore
      mkdir -p NFS_storage_directory/baw/filestore
      chown -R :65534 NFS_storage_directory/baw/logstore
      chmod -R g+rw NFS_storage_directory/baw/logstore
      chown -R :65534 NFS_storage_directory/baw/dumpstore
      chmod -R g+rw NFS_storage_directory/baw/dumpstore
      chown -R :65534 NFS_storage_directory/baw/filestore
      chmod -R g+rw NFS_storage_directory/baw/filestore
  - Create the PVs and PVCs that IBM Business Automation Workflow requires by saving the following YAML files on the OpenShift master node and then running the oc apply -f YAML_file_name command on the files in the following order. In all of the files, NFS_storage_directory is the storage folder on your NFS server and NFS_server_IP is the IP address of your NFS server.

    1. baw-logstore-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-logstore-pv
      spec:
        storageClassName: baw-logstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baw/logstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    2. baw-logstore-pvc.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-logstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baw-logstore-pv
        volumeName: baw-logstore-pv

    3. baw-dumpstore-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-dumpstore-pv
      spec:
        storageClassName: baw-dumpstore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baw/dumpstore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    4. baw-dumpstore-pvc.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-dumpstore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baw-dumpstore-pv
        volumeName: baw-dumpstore-pv

    5. V20.0.0.2 baw-filestore-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: baw-filestore-pv
      spec:
        storageClassName: baw-filestore-pv
        accessModes:
        - ReadWriteMany
        capacity:
          storage: 10Gi
        nfs:
          path: NFS_storage_directory/baw/filestore
          server: NFS_server_IP
        persistentVolumeReclaimPolicy: Recycle

    6. V20.0.0.2 baw-filestore-pvc.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: baw-filestore-pvc
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: baw-filestore-pv
        volumeName: baw-filestore-pv
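After the apply commands complete, you can optionally confirm that each claim is bound to its volume:

    # Each PVC should report STATUS Bound against its matching PV.
    oc get pvc baw-logstore-pvc baw-dumpstore-pvc baw-filestore-pvc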