Setting up NFS storage
By default, NFS does not support dynamic storage provisioning. If you plan to use NFS as persistent storage for Cloud Pak for Data, you must set up your NFS server before you install Cloud Pak for Data.
Supported storage topology
If you use NFS storage, you can use one of the following cluster configurations:
- NFS on a dedicated node in the same VLAN as the cluster (recommended)
- An external NFS server
If you select this option, configure the server based on your availability requirements, and ensure that you have a sufficiently fast network connection (at least 1 Gb) to reduce latency and ensure performance.
Ensure that the following statements are true:
- All of the nodes in the cluster must have access to mount the NFS server.
- All of the nodes in the cluster must have read/write access to the NFS server.
- Containerized processes must have read/write access to the NFS server. Important: Containerized processes create files that are owned by various UIDs. (In Cloud Pak for Data, most services use long UIDs based on the Red Hat® OpenShift® Container Platform project where they are installed.) If you restrict access to the NFS server to specific UIDs, you might encounter errors when installing or running Cloud Pak for Data.
For information on determining which UIDs are used, see Service UIDs.
- If you use NFS as the storage for a database service, ensure that the storage has sufficient throughput. For details, see the appropriate topic for your environment.
Setting the NFS export
Ensure that the NFS export is set to
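The exact export options depend on your environment. As an illustration only, a typical /etc/exports entry for this kind of deployment (assuming the commonly used options rw, sync, and no_root_squash, so that containerized processes running under arbitrary UIDs can write to the share; the client subnet shown is a placeholder) might look like:

```
# /etc/exports on the NFS server
# 192.0.2.0/24 is an illustrative cluster subnet; replace with your own
/nfs/cpshare  192.0.2.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, run exportfs -ra on the NFS server to apply the change.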
Configuring dynamic storage
By default, Red Hat OpenShift does not include a provisioner plug-in to create an NFS storage class. To dynamically provision NFS storage, use the Kubernetes NFS-Client Provisioner, which is available from the Kubernetes SIGs organization on GitHub.
- Permissions you need for this task
- You must be a cluster administrator.
To configure dynamic storage:
- Ensure that your NFS server is accessible from your Red Hat OpenShift Container Platform cluster.
- Clone the https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner repository.
- Open a bash shell and change to the deploy directory of the repository.
- Log in to your Red Hat OpenShift Container Platform cluster as a user with sufficient permissions to complete this task:
oc login OpenShift_URL:port
- Authorize the provisioner by running the following commands.
- Create the required role-based access control. Important: If you plan to deploy the NFS provisioner to a project other than the default project, you must replace each instance of default in the rbac.yaml file before you run this command.
oc create -f rbac.yaml
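For reference, the project name appears in several places in the rbac.yaml file. The following abbreviated excerpt (based on the structure of the upstream nfs-subdir-external-provisioner repository, which may change over time) shows the kind of fields to update:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default        # Replace with your project name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default      # Replace with your project name
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```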
- Add the nfs-client-provisioner service account to the hostmount-anyuid security context constraint. If you plan to deploy the NFS provisioner to a project other than the default project, replace default in the following command:
oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:nfs-client-provisioner
- Edit the deployment.yaml file in the deploy directory to specify the following information:
- The project (namespace) where the NFS provisioner is deployed.
- The image that corresponds to your Red Hat OpenShift Container Platform architecture:
- The hostname of your NFS server.
- The path where you want to dynamically provision storage on your NFS server.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default            # Specify the namespace where the NFS provisioner is deployed
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: nfs-provisioner-image   # Specify the appropriate image based on your OpenShift architecture
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage
            - name: NFS_SERVER
              value: MyNFSHostname       # Specify the host name of your NFS server
            - name: NFS_PATH
              value: /nfs/cpshare/       # Specify the path where you want to provision storage
      volumes:
        - name: nfs-client-root
          nfs:
            server: MyNFSHostname        # Specify the host name of your NFS server
            path: /nfs/cpshare/          # Specify the path where you want to provision storage
```
- Deploy the NFS provisioner:
oc create -f deployment.yaml
- Edit the class.yaml file to specify the names of the storage classes that you want to create. The following example includes the recommended storage class name:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage   # Recommended storage class name
provisioner: nfs-storage      # This name must match the PROVISIONER_NAME value you specified in the deployment.yaml
parameters:
  archiveOnDelete: "false"
```
For a complete list of parameters, see Deploying your storage class in the NFS provisioner documentation.
- Create the storage class:
oc create -f class.yaml
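After the storage class exists, workloads can request dynamically provisioned NFS volumes by referencing it. As a sketch (the claim name and size here are illustrative, not part of the provisioner repository), a persistent volume claim that uses the recommended storage class looks like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-nfs-claim               # Illustrative name
spec:
  storageClassName: managed-nfs-storage # Must match the name in class.yaml
  accessModes:
    - ReadWriteMany                     # NFS supports shared read/write access
  resources:
    requests:
      storage: 1Gi                      # Illustrative size
```

When such a claim is created, the provisioner creates a subdirectory under the NFS path that you specified in deployment.yaml and binds the claim to it.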
- Verify that the NFS provisioner is working:
- Create a test persistent volume claim (PVC) and a test pod. Note: The test-claim.yaml file references a specific storage class name; ensure that it matches the name that you specified in the class.yaml file.
oc create -f test-claim.yaml -f test-pod.yaml
- On your NFS server, verify that the share directory, which you specified in the deployment.yaml file, contains a file called SUCCESS.
- Remove the test pod and PVC:
oc delete -f test-pod.yaml -f test-claim.yaml