Setting up NFS storage

By default, NFS does not support dynamic storage provisioning. If you plan to use NFS for persistent storage, you must set up your NFS storage before you install Cloud Pak for Data.

Supported storage topology

If you use NFS storage, you can use one of the following cluster configurations:

  • NFS on a dedicated node in the same VLAN as the cluster (recommended)
  • An external NFS server

    If you select this option, configure the server based on your availability requirements and ensure that you have a sufficiently fast network connection (at least 1 Gbps) to reduce latency and ensure performance.
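
    For example, you can do a rough check of the network latency and throughput between a cluster node and an external NFS server before you install Cloud Pak for Data. This is a minimal sketch; it assumes that the iperf3 utility is installed on both systems, and nfs.example.com is a placeholder host name.

      # On the NFS server, start a temporary iperf3 server: iperf3 -s
      # On a cluster node, check the round-trip latency and measure the throughput
      ping -c 5 nfs.example.com
      iperf3 -c nfs.example.com -t 10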

Configuration requirements

Ensure that the following statements are true:

  • All of the nodes in the cluster must have access to mount the NFS server.
  • All of the nodes in the cluster must have read/write access to the NFS server.
  • Containerized processes must have read/write access to the NFS server.
    Important: Containerized processes create files that are owned by various UIDs. (In Cloud Pak for Data, most services use long UIDs based on the Red Hat® OpenShift® Container Platform project where they are installed.) If you restrict access to the NFS server to specific UIDs, you might encounter errors when installing or running Cloud Pak for Data.

    For information on determining which UIDs are used, see Service UIDs.

  • If you use NFS as the storage for a database service, ensure that the storage has sufficient throughput. For details, see the appropriate topic for your environment.
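
To confirm that the nodes and containerized processes can mount and write to the export, you can run a quick check from any cluster node. This is a minimal sketch; nfs.example.com, /nfs/cpshare, and the UID 1000330999 are placeholder values for illustration.

    # Confirm that the export is visible from the node
    showmount -e nfs.example.com
    # Mount the export temporarily and write to it as an arbitrary non-root UID
    sudo mkdir -p /mnt/nfs-test
    sudo mount -t nfs nfs.example.com:/nfs/cpshare /mnt/nfs-test
    sudo -u '#1000330999' touch /mnt/nfs-test/uid-write-test && echo "write OK"
    sudo rm -f /mnt/nfs-test/uid-write-test
    sudo umount /mnt/nfs-test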

Setting the NFS export

Ensure that the NFS export is set to no_root_squash.

Note: If you are installing Cloud Pak for Data from the IBM® Cloud catalog, the NFS export is automatically set to no_root_squash.

However, if you are manually installing Cloud Pak for Data on IBM Cloud, you must follow the guidance in Implementing no_root_squash for NFS.
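
For example, an export entry in /etc/exports on the NFS server might look like the following line. The export path (/nfs/cpshare) and the client subnet (192.0.2.0/24) are placeholder values; substitute the values for your environment.

    # Allow read/write access from the cluster subnet without squashing root
    /nfs/cpshare   192.0.2.0/24(rw,sync,no_root_squash)

After you edit /etc/exports, re-export the file systems on the NFS server:

    exportfs -ra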

Configuring dynamic storage

By default, Red Hat OpenShift does not include a provisioner plug-in to create an NFS storage class. To dynamically provision NFS storage, use the Kubernetes NFS Subdir External Provisioner (formerly known as the NFS-Client Provisioner), which is available from the Kubernetes SIGs organization on GitHub.

Permissions you need for this task
You must be a cluster administrator.
Important: The following steps assume you have an existing NFS server. Ensure that you know how to connect to your NFS server. At a minimum, you must have the hostname of the server.

To configure dynamic storage:

  1. Ensure that your NFS server is accessible from your Red Hat OpenShift Container Platform cluster.
  2. Clone the https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner repository.
  3. Open a bash shell and change to the deploy directory of the repository.
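    For example, assuming that you clone the repository into your current working directory:
    git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
    cd nfs-subdir-external-provisioner/deploy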
  4. Log in to your Red Hat OpenShift Container Platform cluster as a user with sufficient permissions to complete the task:
    oc login OpenShift_URL:port
  5. Authorize the provisioner by running the following commands.
    1. Create the required role based access control.
      Important: If you plan to deploy the NFS provisioner to a project other than the default project, you must replace each instance of default in the rbac.yaml file before you run this command.
      oc create -f rbac.yaml
    2. Add the nfs-client-provisioner security context constraint to the system service account.

      If you plan to deploy the NFS provisioner to a project other than the default project, replace default in the following command.

      oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:nfs-client-provisioner
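      For example, if you deploy the provisioner to a project named nfs-provisioner (a placeholder project name), the commands in this step might look like the following sketch:
      # Create the project if it does not already exist
      oc new-project nfs-provisioner
      # Point the role-based access control definitions at the target project
      sed -i 's/namespace: default/namespace: nfs-provisioner/g' rbac.yaml
      oc create -f rbac.yaml
      oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:nfs-provisioner:nfs-client-provisioner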
  6. Edit the deployment.yaml file in the deploy directory to specify the following information:
    • The project (namespace) where the NFS provisioner is deployed.
    • The image that corresponds to your Red Hat OpenShift Container Platform architecture:
      • x86-64: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:amd64-linux-v4.0.2
      • Power®: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:ppc64le-linux-v4.0.2
      • s390x: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:s390x-linux-v4.0.2
    • The hostname of your NFS server.
    • The path where you want to dynamically provision storage on your NFS server.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      namespace: default         # Specify the namespace where the NFS provisioner is deployed
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: nfs-provisioner-image    # Specify the appropriate image based on your OpenShift architecture 
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: nfs-storage
                - name: NFS_SERVER
                  value: MyNFSHostname        # Specify the host name of your NFS server
                - name: NFS_PATH
                  value: /nfs/cpshare/        # Specify the path where you want to provision storage
          volumes:
            - name: nfs-client-root
              nfs:
                server: MyNFSHostname         # Specify the host name of your NFS server
                path: /nfs/cpshare/           # Specify the path where you want to provision storage
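    For example, assuming the placeholder project name (nfs-provisioner), the x86-64 image, and an NFS server named nfs.example.com, you could apply the edits with sed instead of editing the file manually. Substitute your own values.
    sed -i 's/namespace: default/namespace: nfs-provisioner/' deployment.yaml
    sed -i 's|nfs-provisioner-image|gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:amd64-linux-v4.0.2|' deployment.yaml
    sed -i 's/MyNFSHostname/nfs.example.com/g' deployment.yaml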
  7. Deploy the NFS provisioner:
    oc create -f deployment.yaml
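    Before you continue, you can verify that the provisioner pod is running. Replace nfs-provisioner with the project where you deployed the provisioner (default if you did not change it):
    oc get pods -n nfs-provisioner -l app=nfs-client-provisioner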
  8. Edit the class.yaml file to specify the names of the storage classes that you want to create. The following example includes the recommended managed-nfs-storage storage class:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage        # Recommended storage class name  
    provisioner: nfs-storage        # This name must match the PROVISIONER_NAME value that you specified in the deployment.yaml
    parameters:
      archiveOnDelete: "false"

    For a complete list of parameters, see Deploying your storage class in the NFS provisioner documentation.

  9. Create the storage class:
    oc create -f class.yaml
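    You can confirm that the storage class was created:
    oc get storageclass managed-nfs-storage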
  10. Verify that the NFS provisioner is running correctly:
    1. Create a test persistent volume claim (PVC).
      Note: The test-claim.yaml file uses the managed-nfs-storage storage class.
      oc create -f test-claim.yaml -f test-pod.yaml
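      Before you check the NFS server, you can confirm that the claim is bound and that the test pod ran. (The names test-claim and test-pod are the default resource names in the repository's test files.)
      oc get pvc test-claim
      oc get pod test-pod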
    2. On your NFS server, verify that the share directory, which you specified in the deployment.yaml file, contains a file called SUCCESS.
    3. Remove the test PVC:
      oc delete -f test-pod.yaml -f test-claim.yaml