Creating storage for Data Persistence
- Kubernetes - Persistent Volumes
- Red Hat OpenShift - Persistent Volume Overview
- Dynamic Provisioning using storage class
- Pre-created Persistent Volume
- Pre-created Persistent Volume Claim
- The only supported access mode is `ReadWriteOnce`
Dynamic Provisioning
- `persistence.useDynamicProvisioning` - Must be set to `true`. By default, it is set to `false`, which means dynamic provisioning is disabled.
- `pvClaim.storageClassName` - The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the storage classes available for this chart.
- `secret.certSecretName` - Specify the certificate secret required for Secure Plus configuration or LDAP support. Update this parameter with a valid certificate secret. Refer to Creating secret for more information.
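For instance, these parameters could be supplied in a custom values.yaml along the lines of the following sketch. The nesting of the keys is inferred from the dotted parameter names above; verify the exact structure against the values.yaml shipped with the chart, and replace the placeholders with real values.

```
persistence:
  useDynamicProvisioning: true
pvClaim:
  storageClassName: <storage class name>
secret:
  certSecretName: <certificate secret name>
```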
Non-Dynamic Provisioning
Non-Dynamic Provisioning is supported using a pre-created Persistent Volume and a pre-created Persistent Volume Claim. The storage volume should contain the Connect:Direct for UNIX Secure Plus certificate files to be used for installation. Create a directory named "CDFILES" inside the mount path and place the certificate files in that directory. Similarly, place the LDAP certificates in the same directory.
Using pre-created Persistent Volume - When creating a Persistent Volume, make a note of the storage class and metadata labels; they are required to configure the Persistent Volume Claim's storage class and label selector during deployment. This ensures that the claim is bound to the Persistent Volume based on a label match. These labels can be passed to the helm chart either with the `--set` flag or a custom values.yaml file. The parameters defined in values.yaml for the label name and its value are `pvClaim.selector.label` and `pvClaim.selector.value` respectively. Sample Persistent Volume definitions, one using an NFS server and one using a host path, are shown below.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name>
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS server IP address>
    path: <mount path>
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name>
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: <mount path>
Create the Persistent Volume by running the appropriate command for your platform:
kubectl create -f <persistentVolume yaml file>
oc create -f <persistentVolume yaml file>
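To bind the claim to a pre-created Persistent Volume such as the samples above, the selector parameters can be pointed at one of the PV labels. A minimal values.yaml sketch, assuming the `purpose: cdconfig` label from the sample definitions is used for the match:

```
pvClaim:
  storageClassName: <storage classname>
  selector:
    label: purpose
    value: cdconfig
```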
Using pre-created Persistent Volume Claim (PVC) - An existing PVC can also be used for the deployment. The Persistent Volume backing the PVC should contain the certificate files required for Connect:Direct for UNIX Secure Plus or LDAP TLS configuration. The parameter for a pre-created PVC is `pvClaim.existingClaimName`. Pass a valid PVC name to this parameter; otherwise, the deployment fails.
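As an illustration, a pre-created PVC could be defined as in the following sketch. It requests the same storage class as the sample Persistent Volumes and selects them by the `purpose: cdconfig` label; the name and size are placeholders. The claim name is then passed to the chart through `pvClaim.existingClaimName`.

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: <persistent volume claim name>
spec:
  storageClassName: <storage classname>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <storage size>
  selector:
    matchLabels:
      purpose: cdconfig
```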
Apart from the required Persistent Volume, you can bind extra storage mounts using the `extraVolume` and `extraVolumeMounts` parameters provided in values.yaml. These volumes can be of hostPath or NFS type (see the sample snippet after the list below).
- <install_dir>/work
- <install_dir>/ndm/security
- <install_dir>/ndm/cfg
- <install_dir>/ndm/secure+
- <install_dir>/process
- <install_dir>/file_agent/config
- <install_dir>/file_agent/log
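The following sketch shows how the extra mount parameters might be set in values.yaml. The exact shape expected by the chart (a single entry, as shown here, or a list) can differ between chart versions, so verify it against the chart's values.yaml; the names and paths are placeholders.

```
extraVolume:
  name: extra-nfs-data
  nfs:
    server: <NFS server IP address>
    path: <exported path>
extraVolumeMounts:
  name: extra-nfs-data
  mountPath: <mount path inside the container>
```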
Setting permissions on storage
- Option A: The easiest, though undesirable, solution is to have open permissions on the NFS exported directories:
  `chmod -R 777 <path-to-directory>`
- Option B: Alternatively, the permissions can be controlled at the group level by leveraging the supplementalGroups and fsGroup settings. For example, to add a GID to supplementalGroups or fsGroup, use `storageSecurity.supplementalGroups` or `storageSecurity.fsGroup` respectively.
Example:
Suppose the host system has a user, host-user, with UID 2000 and GID 4000. This user has created an 'upload' directory (for files to be sent) and a 'download' directory (for files to be received). These directories can be mounted on the Connect:Direct for UNIX container so that they are available inside the container.
To give cduser access to these directories inside the container, the UID and GID of cduser can be set to `2000` and `4000` respectively. Similarly, you can set the UID and GID of `appuser`, a non-admin Connect:Direct for UNIX user running inside the container. The `appuser` user is created only if you have passed its password to be set inside the container using the cdai_appuser_pwd parameter in cd_param_file.
The UID and GID of cduser can also be set to match a real user on the host system. With the same UID and GID as on the host system, that host user has suitable permissions on the files present on the host path and can be used to edit any settings or configuration files related to Connect:Direct for UNIX running inside the container.
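For this example, the group-level access could be granted with settings along the following lines. This is a sketch that assumes `storageSecurity.supplementalGroups` accepts a list, as the standard pod securityContext does; any parameters that set the UID and GID of cduser and appuser are not shown here.

```
storageSecurity:
  fsGroup: 4000
  supplementalGroups:
    - 4000
```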
Root Squash NFS support
The Connect:Direct for UNIX helm chart can be deployed on a root squash NFS share. Because files and directories mounted in the container are owned by nfsnobody or nobody, the POSIX group ID of the root squash NFS share should be added to the StatefulSet's supplemental group list using `storageSecurity.supplementalGroups` in the values.yaml file. Similarly, if any extra NFS share is mounted, proper read/write permissions can be provided to the container user through supplemental groups only.