Setting up an NFS mount in DataStage pods by using a persistent volume (PV)

About this task

In the legacy DataStage® environment, it is common practice to use a Network File System (NFS) mount to pass data files, such as CSV and XML files, between DataStage and source or target systems. For DataStage in Cloud Pak for Data, you can set up an NFS mount in the DataStage pods by using a PV.

Complete the following steps to set up an NFS mount in DataStage pods by using a PV.

Procedure

  1. Mount the NFS volume on worker nodes.
    1. Log in to the worker node and declare the mount details in a systemd mount unit file. The unit file name must match the mount point path, with slashes replaced by dashes; for the /var/mnt/nfs mount point in this example, create the file /etc/systemd/system/var-mnt-nfs.mount. See the following example.
      [Unit]
      Description = Mount NFS share
      
      [Mount]
      What=10.176.115.194:/share
      Where=/var/mnt/nfs
      Type=nfs
      
      [Install]
      WantedBy = multi-user.target
    2. Create a /var/mnt/nfs directory (for example, with mkdir -p /var/mnt/nfs). Then, run the systemctl command to enable and start the mount. See the following example.
      systemctl enable --now /etc/systemd/system/var-mnt-nfs.mount
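      To verify that the share is mounted, you can run commands similar to the following example. These commands assume the example unit name and mount point from step 1.a.
      systemctl status var-mnt-nfs.mount
      df -h /var/mnt/nfs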
  2. Allow access from a pod to the remote NFS.
    By default, SELinux does not allow a pod to write to a remote NFS server. Run the following command on the worker node to allow write access by pods. The -P option makes the setting persistent across reboots.
    setsebool -P virt_use_nfs on
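    To confirm the setting, you can run the getsebool command; the expected output is virt_use_nfs --> on.
    getsebool virt_use_nfs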
  3. Repeat steps 1.a through 2 on all worker nodes.
  4. Add a security context constraint.

    The DataStage pods run with the service account wkc-iis-sa. This service account does not have permission to mount local host directories because host paths are protected. You must add the hostmount-anyuid security context constraint (SCC) to this service account.

    oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:zen:wkc-iis-sa
    Note: The cluster administrator must run this command.
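    Depending on your OpenShift version, one way to confirm the assignment is to describe the SCC and check that the service account appears in its Users list.
    oc describe scc hostmount-anyuid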
  5. Create a PV.
    1. Create a file nfs_pv.yaml:
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nfsshare-pv-volume 
        labels:
          type: local
      spec:
        storageClassName: manual 
        capacity:
          storage: 5Gi
        accessModes:
          - ReadWriteOnce 
        persistentVolumeReclaimPolicy: Retain
        hostPath:
          path: "/var/mnt/nfs"
    2. Create the PV:
      oc create -f nfs_pv.yaml
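      You can verify that the PV exists and reports the Available status:
      oc get pv nfsshare-pv-volume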
  6. Create a persistent volume claim (PVC).
    1. Create a file nfs_pvc.yaml:
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nfsshare-pvc-volume
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: manual
      
    2. Create the PVC:
      oc create -f nfs_pvc.yaml -n <namespace>
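      You can verify that the claim is bound to the PV. The PVC and the PV are matched through the manual storage class, and the STATUS column shows Bound.
      oc get pvc nfsshare-pvc-volume -n <namespace>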
  7. Update the StatefulSet specification.
    1. Use the OpenShift® Web Console or the oc command to update the DataStage StatefulSets.
      You must update two StatefulSets: is-en-conductor and ds-engine-compute. If you use the OpenShift Web Console, search for those StatefulSets. If you use the oc command, run the following command to edit each StatefulSet.
      oc edit sts <is-en-conductor or ds-engine-compute>
    2. Find the volumeMounts element and declare the mount location. You can choose any mount path and volume name; this example uses /share and nfs-data-share.
      volumeMounts:
        - mountPath: /mnt/dedicated_vol/Engine
          name: engine-dedicated-volume
        - mountPath: /opt/ia/custom
          name: engine-dedicated-volume
          subPath: is-en-conductor-0/ia/custom
        - mountPath: /share      ## <-- here
          name: nfs-data-share   ## <-- here
      
    3. Find the volumes element and add the volume declaration. The volume name must match the name that you declared in volumeMounts. See the element that starts with "- name: nfs-data-share" in the following example.
      volumes:
        - name: engine-dedicated-volume
          persistentVolumeClaim:
            claimName: 0072-iis-en-dedicated-pvc
        - name: nfs-data-share    ## <- here and below
          persistentVolumeClaim:  
            claimName: nfsshare-pvc-volume 
      
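      After you save the changes, if the StatefulSet uses the default RollingUpdate update strategy, OpenShift restarts the pods so that they pick up the new volume. You can optionally watch the rollout of each StatefulSet:
      oc rollout status sts is-en-conductor
      oc rollout status sts ds-engine-compute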
  8. Wait for the conductor and compute pods to be in the Running state. Run the following command to check the state of the pods:
    oc get pod | egrep "ds-engine-compute|is-en-conductor"
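    When the pods are running, you can optionally confirm that the NFS share is visible inside the conductor pod. This example assumes the pod name is-en-conductor-0 and the /share mount path from step 7.
    oc exec is-en-conductor-0 -- df -h /share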