Setting up dynamic provisioning

NFS does not support dynamic storage provisioning by default, and Red Hat® OpenShift® does not include a provisioner plug-in to create an NFS storage class. Therefore, you must set up dynamic storage provisioning on your NFS server.

Installation phase
  • Setting up a client workstation
  • Collecting required information
  • Preparing your cluster (you are here)
  • Installing the Cloud Pak for Data platform and services
Who needs to complete this task?
A cluster administrator must complete this task.
When do you need to complete this task?
If you plan to use NFS storage, you must set up dynamic provisioning before you install Cloud Pak for Data.

Before you begin

Best practice: You can run many of the commands in this task exactly as written if you set up environment variables for your installation. For instructions, see Setting up installation environment variables.

Ensure that you source the environment variables before you run the commands in this task.

If you are installing any of the following services on Cloud Pak for Data, ensure your NFS server is configured before you set up dynamic provisioning:
  • Db2®
  • Db2 Warehouse
  • Watson™ Knowledge Catalog
  • OpenPages®
  • Big SQL
  • Watson Query

About this task

The steps in this procedure use the Kubernetes NFS-Client Provisioner (from the Kubernetes SIGs organization) to set up dynamic provisioning with NFS storage.

Important: You must have an existing NFS server to complete this task. Ensure that you know how to connect to your NFS server. At a minimum, you must have the hostname of the server.

Your NFS server must be accessible from your Red Hat OpenShift Container Platform cluster.
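Before continuing, you can spot-check that the export is reachable; a minimal sketch, assuming the showmount and mount utilities are available on a host that can reach the server, with <server-address> and <path> standing in for your NFS server's details:

```shell
# Hypothetical connectivity check; replace <server-address> and <path>
# with your NFS server's hostname (or IP) and exported path.
showmount -e <server-address>                        # list the exports the server offers
mkdir -p /tmp/nfs-check
mount -t nfs <server-address>:<path> /tmp/nfs-check  # attempt an actual mount
touch /tmp/nfs-check/.write-test && rm /tmp/nfs-check/.write-test
umount /tmp/nfs-check
```

If the mount or write test fails, resolve the connectivity or export permissions issue before you set up dynamic provisioning.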

Mirroring the provisioner images to a private container registry

If you pull images from a private container registry, mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry. Complete the appropriate task for your environment:

Mirroring the provisioner images directly to a private container registry

To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:

  1. Log in to your private container registry:
    cpd-cli manage login-private-registry \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PUSH_USER} \
    ${PRIVATE_REGISTRY_PUSH_PASSWORD}

    If your private registry is not secured, see cpd-cli manage login-private-registry for additional options.

  2. Mirror the images to the private container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=${PRIVATE_REGISTRY_LOCATION} \
    --source_registry=k8s.gcr.io/sig-storage
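As a quick sanity check after mirroring, you can inspect the copied image in the target registry. The following sketch uses skopeo; the repository path under your registry is an assumption (the exact layout depends on how cpd-cli mirrors the images), so adjust it to match what you see in your registry:

```shell
# Hypothetical verification; the repository path below is an assumption.
skopeo inspect \
  --creds "${PRIVATE_REGISTRY_PULL_USER}:${PRIVATE_REGISTRY_PULL_PASSWORD}" \
  docker://${PRIVATE_REGISTRY_LOCATION}/nfs-subdir-external-provisioner:v4.0.2
```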


Mirroring the provisioner images using an intermediary container registry

To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:

  1. Mirror the images to the intermediary container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=127.0.0.1:12443 \
    --source_registry=k8s.gcr.io/sig-storage
  2. Move the intermediary container registry behind the firewall.
  3. Log in to your private container registry:
    cpd-cli manage login-private-registry \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PUSH_USER} \
    ${PRIVATE_REGISTRY_PUSH_PASSWORD}

    If your private registry is not secured, see cpd-cli manage login-private-registry for additional options.

  4. Mirror the images to the private container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=${PRIVATE_REGISTRY_LOCATION} \
    --source_registry=127.0.0.1:12443

Configuring dynamic storage

To configure dynamic storage:

  1. Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
    cpd-cli manage login-to-ocp \
    --username=${OCP_USERNAME} \
    --password=${OCP_PASSWORD} \
    --server=${OCP_URL}
    Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
  2. If you mirrored the images to a private container registry, update the global image pull secret so that the cluster can access the Kubernetes NFS-Client Provisioner images.

    The global image pull secret must contain the credentials of an account that can pull images from the private container registry:

    cpd-cli manage add-cred-to-global-pull-secret \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PULL_USER} \
    ${PRIVATE_REGISTRY_PULL_PASSWORD}
  3. Set the following environment variables:
    1. Set NFS_SERVER_LOCATION to the IP address or fully qualified domain name (FQDN) of the NFS server:
      export NFS_SERVER_LOCATION=<server-address>
    2. Set NFS_PATH to the exported path where you want the provisioner to create sub-directories. (The default path is /.)
      export NFS_PATH=<path>
    3. Set PROJECT_NFS_PROVISIONER to the project (namespace) where you want to deploy the Kubernetes NFS-Client Provisioner. The recommended project is nfs-provisioner; however, you can specify any project.
      Important: You must specify an existing project (namespace).
      export PROJECT_NFS_PROVISIONER=<project-name>
    4. Set NFS_STORAGE_CLASS to the name that you want to use for the NFS storage class:
      export NFS_STORAGE_CLASS=<storage-class-name>

      By default, the documentation uses managed-nfs-storage, but you can pick a different storage class name.

    5. Set NFS_IMAGE to the correct image for your Red Hat OpenShift Container Platform architecture:

      x86-64:
      export NFS_IMAGE=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

      ppc64le or s390x:
      export NFS_IMAGE=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
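    Taken together, the exports for a typical x86-64 installation might look like the following. The server name, path, project, and storage class name are example values only; substitute your own:

```shell
# Example values only; replace each one with your environment's details.
export NFS_SERVER_LOCATION=nfs.example.com       # hypothetical NFS server FQDN
export NFS_PATH=/                                # default exported path
export PROJECT_NFS_PROVISIONER=nfs-provisioner   # recommended project (must already exist)
export NFS_STORAGE_CLASS=managed-nfs-storage     # default name used in the documentation
export NFS_IMAGE=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2  # x86-64
echo "Provisioning ${NFS_SERVER_LOCATION}:${NFS_PATH} as ${NFS_STORAGE_CLASS} in ${PROJECT_NFS_PROVISIONER}"
```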
  4. Run the following command to set up dynamic provisioning:
    cpd-cli manage setup-nfs-provisioner \
    --nfs_server=${NFS_SERVER_LOCATION} \
    --nfs_path=${NFS_PATH} \
    --nfs_provisioner_ns=${PROJECT_NFS_PROVISIONER} \
    --nfs_storageclass_name=${NFS_STORAGE_CLASS} \
    --nfs_provisioner_image=${NFS_IMAGE}

    If the command succeeds, the storage class is ready to use.
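    One way to confirm that the storage class works end to end is to create a small test claim and check that it binds. The following is a sketch, assuming you are still logged in with oc; the claim name nfs-test-claim is arbitrary:

```shell
# Hypothetical smoke test; delete the claim when you are done.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
  namespace: ${PROJECT_NFS_PROVISIONER}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ${NFS_STORAGE_CLASS}
  resources:
    requests:
      storage: 1Gi
EOF
oc get pvc nfs-test-claim -n ${PROJECT_NFS_PROVISIONER}     # STATUS should reach Bound
oc delete pvc nfs-test-claim -n ${PROJECT_NFS_PROVISIONER}
```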

  5. If you are using any of the following services, add the mountOptions entry to the storage class:
    • Db2
    • Db2 Warehouse
    • Watson Knowledge Catalog
    • OpenPages
    • Big SQL
    • Watson Query

    Run the following command to update the storage class:

    oc patch storageclass ${NFS_STORAGE_CLASS} \
    --type='json' \
    --patch='[{"op": "add", "path": "/mountOptions", "value": ["nfsvers=3", "nolock"]}]'
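
    You can confirm that the patch took effect by reading the mount options back from the storage class; a minimal check, assuming you are logged in with oc:

```shell
# Print the storage class mount options; nfsvers=3 and nolock should appear in the output.
oc get storageclass ${NFS_STORAGE_CLASS} -o jsonpath='{.mountOptions}'
```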