Setting up PersistentVolumes

A PersistentVolume (PV) is a piece of storage in the cluster that is provisioned by an administrator or dynamically provisioned using storage classes.

You need to create a PersistentVolume to provide the required environment-specific external resources, such as database drivers, JCE policy files, keystores, and truststores, that enable SSL connections to the database server, MQ server, and so on.
This volume is further referenced as resources volume.

You can redirect the application logs to the console, which is the recommended option, or write them to a file system or a storage location outside the application containers. In the latter case, you need to create an additional PersistentVolume for logs.
This volume is further referenced as logs volume.

Similarly, you must map additional volumes to externalize data that the application generates at runtime, for example, documents.
To accommodate additional PersistentVolumes, the application Helm chart values.yaml provides extension points, extraVolumes and extraVolumeMounts, to extend the deployment and to create additional PersistentVolume claims and volume mounts that match the additional PersistentVolumes.
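As a sketch of how these extension points might be used, the fragment below mounts one extra claim into the application containers. The volume name, claim name, and mount path are illustrative assumptions, not values taken from the chart:

```yaml
# Hypothetical values.yaml fragment: attach one additional PVC through the
# extraVolumes / extraVolumeMounts extension points.
# The volume name (extra-data), claim name (my-extra-pvc), and mount path
# are examples only; substitute values for your environment.
extraVolumes:
  - name: extra-data
    persistentVolumeClaim:
      claimName: my-extra-pvc
extraVolumeMounts:
  - name: extra-data
    mountPath: /opt/extra-data
```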

Prerequisites for application deployment on a container platform:
  1. Create a PersistentVolume for resources with the access mode ReadOnlyMany and storage of 100Mi or less (the size can vary based on the set of jars or files to be provided externally).
  2. If applicable, create a PersistentVolume for logs with the access mode ReadWriteMany and storage of 1Gi or less (the size can vary based on log file usage and purge intervals).
  3. If applicable, create additional PersistentVolumes with an access mode and storage limit suitable for the applicable use cases.

When creating PersistentVolumes, make a note of the storage class and metadata labels; they are required to configure the respective PersistentVolume claim's storage class and label selector in the Helm chart configuration (refer to the configuration section for more details) so that the claims are bound to the PersistentVolumes based on the match.

If a PersistentVolumeClaim is pre-created, its name can be configured directly against the predefinedPersistentVolumeClaimName setting in the respective PersistentVolumeClaim configuration section of the Helm chart configuration, namely appResourcesPVC, appLogsPVC, appDocumentsPVC, and extraPVCs. For more information, see Configuring the Certified Container. If this configuration is specified, the application deployment does not create the respective persistent volume claim and binds the application pods to the configured volume claims.
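As an illustration of referencing a pre-created claim, the fragment below sets predefinedPersistentVolumeClaimName in the appResourcesPVC section. The claim name is an assumption for the example:

```yaml
# Hypothetical values.yaml fragment: reuse a pre-created PVC for resources
# instead of having the chart create one.
# The claim name (b2bi-resources-pvc) is an example only.
appResourcesPVC:
  predefinedPersistentVolumeClaimName: b2bi-resources-pvc
```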

If you want to set up an independent PersistentVolumeClaim for each pod replica, set the enableVolumeClaimPerPod flag to true in the applicable PersistentVolumeClaim configuration section of the Helm chart configuration. For more information, see Configuring the Certified Container. If this configuration is enabled, a persistent volume claim and persistent volume are dynamically provisioned for each pod replica in the ASI, AC, and API stateful sets.
Note: If enableVolumeClaimPerPod is enabled for the documents PVC, the external purge job cannot be configured, because it cannot access the document PVCs, which are mapped to individual pods in RWO access mode, to purge the payload documents. You can continue to use the internal purge service.
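As a sketch of enabling per-pod claims for logs, the fragment below turns on the flag in the appLogsPVC section. Any sibling keys, such as a storage class for dynamic provisioning, are assumptions; check the chart's values.yaml for the exact keys:

```yaml
# Hypothetical values.yaml fragment: dynamically provision one PVC and PV
# per pod replica in the ASI, AC, and API stateful sets.
# The storageClassName key and value are examples only.
appLogsPVC:
  enableVolumeClaimPerPod: true
  storageClassName: "standard"
```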
A few sample PersistentVolume and PersistentVolumeClaim templates are bundled in the Helm charts under ./ibm-b2bi-prod (or ibm-sfg-prod)/ibm_cloud_pak/pak_extensions/pre-install/volume/resources-pv (or logs-pv or documents-pv).yaml and ./ibm-b2bi-prod (or ibm-sfg-prod)/ibm_cloud_pak/pak_extensions/pre-install/volumeclaim/sample-pvc.yaml. These templates are provided only for reference and can be modified as per the selected storage option, location, size, and other available platform options.
You can use the container platform command line or UI to create or update PersistentVolumes.
You can use the following command to create a new PersistentVolume after the template is defined:
Template:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: resources-pv
  labels:
    intent: resources
spec:
  storageClassName: "standard"
  capacity:
    storage: 500Mi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: 9.37.37.47
    path: /mnt/nfs/data/b2bi_resources/
Command:
Kubernetes
kubectl create -f /path/to/pv.yaml
Openshift
oc create -f /path/to/pv.yaml
Template:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <PVC_name>
spec:
  storageClassName: "standard"
  selector:
    matchExpressions:
      - {key: intent, operator: In, values: [<PVC_name>]}
  accessModes: [ "ReadOnlyMany" ]
  resources:
    requests:
      storage: "500Mi"
Command:
Kubernetes
kubectl create -f /path/to/sample-pvc.yaml
Openshift
oc create -f /path/to/sample-pvc.yaml
Note: You need to give the application containers appropriate access to the shared storage mapped through PersistentVolumes by using fsGroup or supplemental group IDs, which are configurable in values.yaml.
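As a sketch of granting that access, the fragment below sets a group ID that Kubernetes applies to the pod's volumes and container processes. The top-level key name and the GID values are assumptions; check the chart's values.yaml for the exact keys used by the security context:

```yaml
# Hypothetical values.yaml fragment: pod security context settings that
# grant group-level access to the mounted shared storage.
# The key names (security, fsGroup, supplementalGroups) and GID 65534
# are examples only; use the GIDs that own your NFS exports.
security:
  fsGroup: 65534
  supplementalGroups:
    - 65534
```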