Setting up dynamic provisioning
NFS does not support dynamic storage provisioning by default, and Red Hat® OpenShift® does not include a provisioner plug-in to create an NFS storage class. Therefore, you must set up dynamic storage provisioning on your NFS server.
- Installation phase: Setting up a client workstation
- Who needs to complete this task? A cluster administrator must complete this task.
- When do you need to complete this task? If you plan to use NFS storage, you must set up dynamic provisioning before you install Cloud Pak for Data.
Before you begin
Ensure that you source the environment variables before you run the commands in this task.
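The commands in this task reference environment variables such as ${PRIVATE_REGISTRY_LOCATION} and ${OCP_USERNAME}. A minimal sketch, assuming that you saved those variables in a script (the file name cpd_vars.sh is an example):
# Load the environment variables into the current shell session
source ./cpd_vars.sh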
If you plan to use any of the following services, you must add a mountOptions entry to the NFS storage class, as described in the last step of this task:
- Db2®
- Db2 Warehouse
- Watson™ Knowledge Catalog
- OpenPages®
- Big SQL
- Watson Query
About this task
The steps in this procedure use the Kubernetes NFS-Client Provisioner (from the Kubernetes SIGs organization) to set up dynamic provisioning with NFS storage.
Your NFS server must be accessible from your Red Hat OpenShift Container Platform cluster.
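For example, one quick way to confirm that the server and its export are reachable is to query the server from a host on the cluster network. This sketch assumes that the showmount utility (part of nfs-utils) is installed and that you know the server address and exported path:
# List the exports that the NFS server advertises
showmount -e <server-address>
# Optionally, confirm that the export can be mounted
sudo mount -t nfs <server-address>:<path> /mnt && sudo umount /mnt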
Mirroring the provisioner images to a private container registry
- If your client workstation can connect to the internet and to the private container registry, you can mirror the images directly to your private container registry.
- If your client workstation cannot connect to both the internet and the private container registry at the same time, you must mirror the images to an intermediary container registry before you can mirror them to your private container registry.
Mirroring the provisioner images directly to a private container registry
To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:
- Log in to your private container registry:
  cpd-cli manage login-private-registry \
  ${PRIVATE_REGISTRY_LOCATION} \
  ${PRIVATE_REGISTRY_PUSH_USER} \
  ${PRIVATE_REGISTRY_PUSH_PASSWORD}
  If your private registry is not secured, see cpd-cli manage login-private-registry for additional options.
- Mirror the images to the private container registry:
  cpd-cli manage mirror-nfs-provisioner \
  --target_registry=${PRIVATE_REGISTRY_LOCATION} \
  --source_registry=k8s.gcr.io/sig-storage
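  Optionally, you can confirm that the provisioner image is now present in your private container registry. The repository path in this sketch is an assumption (it mirrors the source repository name under your registry root); adjust it to match where the images were actually pushed:
  # Hypothetical check: list the tags of the mirrored provisioner image
  skopeo list-tags docker://${PRIVATE_REGISTRY_LOCATION}/nfs-subdir-external-provisioner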
Mirroring the provisioner images using an intermediary container registry
To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:
- Mirror the images to the intermediary container registry:
  cpd-cli manage mirror-nfs-provisioner \
  --target_registry=127.0.0.1:12443 \
  --source_registry=k8s.gcr.io/sig-storage
- Move the intermediary container registry behind the firewall.
- Log in to your private container registry:
  cpd-cli manage login-private-registry \
  ${PRIVATE_REGISTRY_LOCATION} \
  ${PRIVATE_REGISTRY_PUSH_USER} \
  ${PRIVATE_REGISTRY_PUSH_PASSWORD}
  If your private registry is not secured, see cpd-cli manage login-private-registry for additional options.
- Mirror the images to the private container registry:
  cpd-cli manage mirror-nfs-provisioner \
  --target_registry=${PRIVATE_REGISTRY_LOCATION} \
  --source_registry=127.0.0.1:12443
Configuring dynamic storage
To configure dynamic storage:
- Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
  cpd-cli manage login-to-ocp \
  --username=${OCP_USERNAME} \
  --password=${OCP_PASSWORD} \
  --server=${OCP_URL}
  Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
- If you mirrored the images to a private container registry, update the global image pull secret so that the cluster can access the Kubernetes NFS-Client Provisioner images.
  The global image pull secret must contain the credentials of an account that can pull images from the private container registry:
  cpd-cli manage add-cred-to-global-pull-secret \
  ${PRIVATE_REGISTRY_LOCATION} \
  ${PRIVATE_REGISTRY_PULL_USER} \
  ${PRIVATE_REGISTRY_PULL_PASSWORD}
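  Optionally, you can confirm that the credentials were added. This check only reads the secret and uses standard OpenShift commands:
  # Decode the global pull secret and confirm that the private registry entry is present
  oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}'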
- Set the following environment variables:
  - Set NFS_SERVER_LOCATION to the IP address or fully qualified domain name (FQDN) of the NFS server:
    export NFS_SERVER_LOCATION=<server-address>
  - Set NFS_PATH to the exported path where you want the provisioner to create sub-directories. (The default path is /.)
    export NFS_PATH=<path>
  - Set PROJECT_NFS_PROVISIONER to the project (namespace) where you want to deploy the Kubernetes NFS-Client Provisioner. The recommended project is nfs-provisioner; however, you can specify any project.
    Important: You must specify an existing project (namespace).
    export PROJECT_NFS_PROVISIONER=<project-name>
  - Set NFS_STORAGE_CLASS to the name that you want to use for the NFS storage class:
    export NFS_STORAGE_CLASS=<storage-class-name>
    By default, the documentation uses managed-nfs-storage, but you can pick a different storage class name.
  - Set NFS_IMAGE to the correct value for your Red Hat OpenShift Container Platform architecture:
    - x86-64: export NFS_IMAGE=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    - ppc64le: export NFS_IMAGE=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
    - s390x: export NFS_IMAGE=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
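  For example, a complete set of exports for a hypothetical environment might look like the following; the server address, path, and names are placeholders, so substitute your own values:
  export NFS_SERVER_LOCATION=nfs.example.com   # placeholder NFS server FQDN
  export NFS_PATH=/exports/cpd                 # placeholder exported path
  export PROJECT_NFS_PROVISIONER=nfs-provisioner
  export NFS_STORAGE_CLASS=managed-nfs-storage
  export NFS_IMAGE=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   # x86-64 image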
- Run the following command to set up dynamic provisioning:
  cpd-cli manage setup-nfs-provisioner \
  --nfs_server=${NFS_SERVER_LOCATION} \
  --nfs_path=${NFS_PATH} \
  --nfs_provisioner_ns=${PROJECT_NFS_PROVISIONER} \
  --nfs_storageclass_name=${NFS_STORAGE_CLASS} \
  --nfs_provisioner_image=${NFS_IMAGE}
  If the command succeeds, the storage class is ready to use.
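  Optionally, you can confirm that the storage class exists and that the provisioner pod is running:
  # Confirm that the storage class was created
  oc get storageclass ${NFS_STORAGE_CLASS}
  # Confirm that the provisioner pod is running in the project that you specified
  oc get pods -n ${PROJECT_NFS_PROVISIONER}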
- If you are using any of the following services, add the mountOptions entry to the storage class:
  - Db2
  - Db2 Warehouse
  - Watson Knowledge Catalog
  - OpenPages
  - Big SQL
  - Watson Query
  Run the following command to update the storage class:
  oc patch storageclass ${NFS_STORAGE_CLASS} \
  --type='json' \
  --patch='[{"op": "replace", "path": "/mountOptions", "value": ["nfsvers=3", "nolock"]}]'
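To verify that dynamic provisioning works end to end, you can optionally create a small test claim against the new storage class and confirm that it becomes Bound. The claim name and size in this sketch are arbitrary examples:
# Create a temporary 1 Gi claim against the NFS storage class
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-provisioning-test
  namespace: ${PROJECT_NFS_PROVISIONER}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ${NFS_STORAGE_CLASS}
EOF
# The claim should reach the Bound state within a short time
oc get pvc nfs-provisioning-test -n ${PROJECT_NFS_PROVISIONER}
# Clean up the test claim when you are done
oc delete pvc nfs-provisioning-test -n ${PROJECT_NFS_PROVISIONER}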