Creating storage for Data Persistence

Containers are ephemeral; all data inside a container is lost when the container is destroyed or removed, so data must be saved to a storage volume using a Persistent Volume. A Persistent Volume is recommended for Connect:Direct® for UNIX deployment files. A Persistent Volume (PV) is a piece of storage in the cluster that is provisioned by an administrator or dynamically provisioned using storage classes. For more information, see the Kubernetes documentation on Persistent Volumes.
As a prerequisite, create a persistent volume backed by NFS or by a host path mounted across all worker nodes. IBM Certified Container Software for CDU supports:
  • Dynamic Provisioning using storage class
  • Pre-created Persistent Volume
  • Pre-created Persistent Volume Claim
  • The only supported access mode is `ReadWriteOnce`

Dynamic Provisioning

Dynamic provisioning is supported using storage classes. To enable dynamic provisioning, use the following configuration for the Helm chart (see the example after this list):
  • persistence.useDynamicProvisioning - Must be set to true. By default, it is set to false, which means dynamic provisioning is disabled.
  • pvClaim.storageClassName - The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the storage classes available to this chart.
  • secret.certSecretName - Specify the certificate secret required for Secure Plus configuration or LDAP support. Update this parameter with a valid certificate secret. Refer to Creating secret for more information.
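For example, these parameters can be passed to helm install with the --set flag. The placeholders below follow the conventions used elsewhere in this section; replace them with a storage class and certificate secret that exist in your cluster:
helm install <release name> <chart name> \
  --set persistence.useDynamicProvisioning=true \
  --set pvClaim.storageClassName=<storage classname> \
  --set secret.certSecretName=<certificate secret name>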

Non-Dynamic Provisioning

Non-dynamic provisioning is supported using a pre-created Persistent Volume and a pre-created Persistent Volume Claim. The storage volume should contain the Connect:Direct for UNIX Secure+ certificate files to be used for installation. Create a directory named "CDFILES" inside the mount path and place the certificate files in it. Similarly, LDAP certificates should be placed in the same directory.

Using a pre-created Persistent Volume - When creating the Persistent Volume, make a note of the storage class and metadata labels, which are required to configure the Persistent Volume Claim's storage class and label selector during deployment. This ensures that the claim is bound to the Persistent Volume based on a label match. These labels can be passed to the Helm chart either with the --set flag or in a custom values.yaml file. The parameters defined in values.yaml for the label name and its value are pvClaim.selector.label and pvClaim.selector.value respectively.

Refer to the YAML templates below for Persistent Volume creation and customize them as per your requirements.
Example: Create a Persistent Volume using an NFS server
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name> 
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS server IP address>
    path: <mount path>
Example: Create a Persistent Volume using a host path
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name>
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: <mount path>
Invoke the following command to create a Persistent Volume:
Kubernetes:
kubectl create -f <persistentVolume yaml file>
OpenShift:
oc create -f <persistentVolume yaml file>

Using a pre-created Persistent Volume Claim (PVC) - An existing PVC can also be used for deployment. The PV backing the PVC should have the certificate files required for Connect:Direct for UNIX Secure+ or LDAP TLS configuration. The parameter for a pre-created PVC is pvClaim.existingClaimName. Pass a valid PVC name to this parameter, otherwise the deployment will fail.
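The following is a minimal sketch of such a claim that binds to a pre-created Persistent Volume through a label selector; the label key purpose and value cdconfig mirror the PV templates above, and the storage class and size are placeholders:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <persistent volume claim name>
spec:
  storageClassName: <storage classname>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <storage size>
  selector:
    matchLabels:
      purpose: cdconfig
Create the claim in the target namespace with kubectl create -f <persistentVolumeClaim yaml file> (or oc create -f on OpenShift), then pass its name through pvClaim.existingClaimName.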

Apart from the required Persistent Volume, you can bind extra storage mounts using the parameters provided in values.yaml. These parameters are extraVolume and extraVolumeMounts. The extra volume can be a host path or an NFS type, as sketched below.
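A possible values.yaml fragment for an NFS-backed extra volume is sketched below. The exact fields accepted under extraVolume and extraVolumeMounts depend on the chart version, so verify the structure against the chart's own values.yaml before use:
extraVolume:
  name: extra-nfs-volume
  nfs:
    path: <mount path on NFS server>
    server: <NFS server IP address>
extraVolumeMounts:
  name: extra-nfs-volume
  mountPath: <mount path inside container>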

The deployment mounts the following configuration/resource directories on the Persistent Volume:
  • <install_dir>/work
  • <install_dir>/ndm/security
  • <install_dir>/ndm/cfg
  • <install_dir>/ndm/secure+
  • <install_dir>/process
  • <install_dir>/file_agent/config
  • <install_dir>/file_agent/log
When the deployment is upgraded or a pod is recreated in a Kubernetes-based cluster, only the data in the above directories is saved/persisted on the Persistent Volume.

Setting permissions on storage

When shared storage is mounted on a container, it is mounted with the same POSIX ownership and permissions present on the exported NFS directory. The mounted directories in the container may not have the correct owner and permissions needed to execute scripts/binaries or write to them. This situation can be handled as follows:
  • Option A: The easiest, though undesirable, solution is to set open permissions on the NFS exported directories.
     chmod -R 777 <path-to-directory>
  • Option B: Alternatively, permissions can be controlled at the group level by leveraging the supplementalGroups and fsGroup settings. For example, to add a GID to supplementalGroups or fsGroup, use storageSecurity.supplementalGroups or storageSecurity.fsGroup, as sketched after this list.
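A minimal values.yaml sketch for Option B is shown below, using the illustrative GID 4000 from the example that follows; substitute the GID that owns your exported directories, and note that whether supplementalGroups accepts a single value or a list may vary by chart version:
storageSecurity:
  # GID applied to mounted volumes (illustrative value)
  fsGroup: 4000
  # supplemental group(s) granted to the container user; list form assumed here
  supplementalGroups:
    - 4000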
Apart from the above recommendations, a default admin user cduser with group cduser is created during deployment. The default UID and GID of cduser are 45678.

Example:

Suppose you have a user host-user with UID 2000 and GID 4000 on the host system. This user has created 'upload' (for files to be sent) and 'download' (for files to be received) directories. These directories can be mounted on the CDU container so that they are available inside the container.

To give cduser access to these directories inside the container, the UID and GID of cduser can be set to `2000` and `4000` respectively. Similarly, you can set the UID and GID of `appuser`, which is a non-admin user of Connect:Direct for UNIX running inside the container. The user `appuser` is created only if you have passed its password to be set inside the container using the cdai_appuser_pwd parameter in cd_param_file.

The UID and GID of cduser can be set to match a real user on the host system. By using the same UID and GID as on the host system, the host user gains suitable permissions on the files present on the host path, so that user can edit any settings or configuration files related to Connect:Direct for UNIX running inside the container.
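For example, the host-side directories from the scenario above could be prepared as follows before mounting them into the container; the path /data/host-user is hypothetical:
# create the transfer directories and assign them to host-user (UID 2000, GID 4000)
mkdir -p /data/host-user/upload /data/host-user/download
chown -R 2000:4000 /data/host-user/upload /data/host-user/download
chmod -R 770 /data/host-user/upload /data/host-user/download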

Root Squash NFS support

A root squash NFS share is a secure NFS share on which root privileges are reduced to those of an unprivileged user. This user is mapped to the nfsnobody or nobody user on the system, so you cannot perform operations such as changing the ownership of files/directories.

The Connect:Direct for UNIX Helm chart can be deployed on root squash NFS. Because files/directories mounted in the container are owned by nfsnobody or nobody, the POSIX group ID of the root squash NFS share should be added to the StatefulSet's supplemental group list using storageSecurity.supplementalGroups in the values.yaml file. Similarly, when any extra NFS share is mounted, proper read/write permission can be provided to the container user through the supplemental group only.
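For example, the group ID that owns the exported directory can be read on the NFS server and then supplied through storageSecurity.supplementalGroups as in the earlier sketch; the export path below is hypothetical:
# On the NFS server: print the GID that owns the exported directory
stat -c '%g' /export/cdu-share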