Creating storage for Data Persistence

Containers are ephemeral: all data inside a container is lost when the container is destroyed or removed, so data must be saved to a storage volume using a Persistent Volume. A Persistent Volume is recommended for Sterling Secure Proxy deployment files. A Persistent Volume (PV) is a piece of storage in the cluster that is provisioned by an administrator or dynamically provisioned using storage classes.
As a prerequisite, create a persistent volume backed by NFS and mounted across all worker nodes. IBM Certified Container Software for SSP supports:
  • Dynamic provisioning using a storage class
  • A pre-created Persistent Volume
  • A pre-created Persistent Volume Claim
The only supported access mode is `ReadWriteOnce`.

Dynamic Provisioning

Dynamic provisioning is supported using storage classes. To enable dynamic provisioning, use the following configuration for the helm chart:
  • persistentVolume.useDynamicProvisioning - Must be set to true. By default, it is set to false, which means dynamic provisioning is disabled.
  • persistentVolumeClaim.storageClassName - The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the storage classes available to this chart.
  • persistentVolume.labelName and persistentVolume.labelValue - Used to bind a specific PV to the PVC. Update these parameters with a valid selector label name and value. Consult your cluster administrator for the available selector name and value; if you do not want to use these parameters, leave both values blank.
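For illustration, the settings above can be supplied in a custom values.yaml. The storage class name below is a hypothetical example; use one available in your cluster:

```
# Hypothetical values.yaml fragment enabling dynamic provisioning
persistentVolume:
  useDynamicProvisioning: true
  labelName: ""                           # leave blank when not binding a specific PV
  labelValue: ""
persistentVolumeClaim:
  storageClassName: managed-nfs-storage   # example only; consult your cluster administrator
```

The same values can also be passed on the helm command line with --set flags instead of a values file.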

Using Pre-created Persistent Volume Claim

To use a pre-created Persistent Volume Claim, use the following configuration for the helm chart:
  • persistentVolume.existingClaimName - Update this parameter with the name of the existing PV claim.
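As a sketch, a pre-created claim might look like the following; the claim name and storage size are hypothetical placeholders:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ssp-pvc                           # hypothetical claim name
spec:
  storageClassName: <storage classname>   # must match the Persistent Volume
  accessModes:
    - ReadWriteOnce                       # the only access mode supported by the chart
  resources:
    requests:
      storage: 2Gi                        # example size
```

The claim name is then passed to the chart, for example with --set persistentVolume.existingClaimName=ssp-pvc.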

Using Pre-created Persistent Volume

When creating a Persistent Volume, make a note of the storage class and metadata labels; these are required to configure the Persistent Volume Claim's storage class and label selector during deployment, ensuring that the claim is bound to the Persistent Volume based on a label match. These labels can be passed to the helm chart either with the --set flag or through a custom values.yaml file. The corresponding parameters defined in values.yaml are persistentVolume.labelName and persistentVolume.labelValue.

Refer to the YAML template below for Persistent Volume creation, and customize it as per your requirements.

Example: Create Persistent volume using NFS server

kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name> 
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS server IP address>
    path: <mount path>
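As a hypothetical example of the label-based binding described above: if the PV were created with the label app.kubernetes.io/name: ssp-pv, the matching chart configuration could look like this (the label value and key are placeholders for your own):

```
persistentVolume:
  useDynamicProvisioning: false
  labelName: app.kubernetes.io/name      # label key set on the pre-created PV
  labelValue: ssp-pv                     # label value set on the pre-created PV
persistentVolumeClaim:
  storageClassName: <storage classname>  # same storage class as the PV
```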

Setting permissions on storage

When shared storage is mounted on a container, it is mounted with the same POSIX ownership and permissions present on the exported NFS directory. The mounted directories in the container may not have the owner and permissions needed to execute scripts and binaries or write to them. This situation can be handled as follows:

  • The permissions can be controlled during deployment. The default UID and GID of the container user is 1000. Permissions can be controlled at the group level by leveraging the supplemental group setting; for example, a GID can be added to the supplemental groups using storageSecurity.supplementalGroupId.

    All users who are part of that group (GID) have suitable permissions on the files in the PV-mounted directory and can execute or edit any tool, setting, or configuration file.

    Also, ensure that the setgid bit is set on the PV directory if you want to control PV directory permissions through the group (storageSecurity.supplementalGroupId); otherwise it is not needed.

  • To locate the setgid bit, look for an ‘s’ in the group section of the PV directory permissions, as shown in the example below:
     ls -lrth
        drwxr-sr-x 2 root root  6 Aug  3 22:36 PV_TEST
    
  • To set the setgid bit, use the following command:
    chmod g+s <PV Mounted Directory>
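The setgid workflow above can be sketched with plain shell commands; the directory path here is only a stand-in for your actual PV mount point:

```shell
# Demonstration of the setgid bit on a directory standing in for the
# PV mount point (the path /tmp/pv_demo is illustrative only).
mkdir -p /tmp/pv_demo
chmod g+rwx /tmp/pv_demo    # give the group full access
chmod g+s /tmp/pv_demo      # set the setgid bit
# Files created inside now inherit the directory's group ownership
touch /tmp/pv_demo/settings.cfg
ls -ld /tmp/pv_demo         # the 's' in the group triad confirms setgid
```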