Setting up Amazon Elastic File System

Amazon Elastic File System (EFS) does not support dynamic storage provisioning by default, and Red Hat® OpenShift® does not include a provisioner plug-in to create an NFS-based storage class. Therefore, you must set up dynamic storage provisioning on your Amazon Elastic File System.

Installation phase
  1. Setting up a client workstation
  2. Collecting required information
  3. Preparing your cluster (you are here)
  4. Installing the Cloud Pak for Data platform and services
Who needs to complete this task?
A cluster administrator or a storage administrator must complete this task.
When do you need to complete this task?
If you plan to use EFS storage, you must set up dynamic provisioning before you install Cloud Pak for Data.

If you plan to install Cloud Pak for Data from the AWS Marketplace, you can skip this task.

Before you begin

Best practice: You can run many of the commands in this task exactly as written if you set up environment variables for your installation. For instructions, see Setting up installation environment variables.

Ensure that you source the environment variables before you run the commands in this task.

About this task

The steps in this procedure use the Kubernetes NFS-Client Provisioner (from the Kubernetes SIGs organization) to set up dynamic provisioning with EFS storage.

Creating an EFS file system

Use the following guidance to create an EFS file system that is accessible from the cluster.

From your EC2 dashboard:

  1. From the navigation menu, select Instances.
  2. Identify a worker node and click the Instance ID of the node.
  3. Obtain the VPC ID and security group for the worker node:
    1. In the instance summary for the node, locate the VPC ID. Save the ID in a text file.
    2. Open the Security tab, and locate the Security groups. Save the ID to the text file.
  4. Obtain the CIDR for the VPC:
    1. From the instance summary, click the VPC ID to open the VPC Management Console.
    2. From the VPC Management Console, locate the CIDR for the VPC. Save the CIDR to the text file.
    3. Close the VPC Management Console.
  5. Edit the inbound rules for the security group:
    1. From the navigation menu, select Security Groups.
    2. Search for the security group that you identified in the preceding steps.
    3. Click the Security group ID.
    4. On the Inbound rules tab, click Edit inbound rules.
    5. Scroll to the end of the rules and click Add rule.
    6. Specify the following values:
      • For the Type, specify NFS.
      • For the Source, specify Custom.
      • In the search field, enter the CIDR value that you identified in the preceding steps.
    7. Click Save rules.
  6. Create the EFS file system:
    1. Go to https://console.aws.amazon.com/efs.
    2. Click Create file system. Then, click Customize.
    3. On the File system settings page, give the file system a name. Then, click Next.
    4. On the Network access page, select the VPC that you identified in the preceding steps.
    5. For each availability zone in the VPC:
      • Select a private subnet ID.
      • Remove the default security group and replace it with the security group that you identified in the preceding steps.
    6. Click Next. Then, click Next again.
    7. On the Review and create page, click Create.
  7. Wait for the file system to be created. Write down the ID of the file system.
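If you prefer to work from the command line, the console steps above can be sketched with the AWS CLI. This is a minimal sketch, not the documented procedure: the security group ID, CIDR, and subnet ID are hypothetical placeholders, and you must repeat the create-mount-target call for each private subnet in the VPC.

```shell
# Hypothetical placeholders; use the security group ID, CIDR,
# and subnet IDs that you saved to your text file.
SG_ID=sg-0123456789abcdef0
VPC_CIDR=10.0.0.0/16
SUBNET_ID=subnet-0123456789abcdef0

# Step 5: allow inbound NFS (TCP port 2049) from the VPC CIDR.
aws ec2 authorize-security-group-ingress \
  --group-id "${SG_ID}" \
  --protocol tcp --port 2049 \
  --cidr "${VPC_CIDR}"

# Step 6: create the file system and capture its ID.
FS_ID=$(aws efs create-file-system \
  --tags Key=Name,Value=cpd-efs \
  --query 'FileSystemId' --output text)

# Create a mount target in a private subnet, attached to the same
# security group. Repeat for each availability zone in the VPC.
aws efs create-mount-target \
  --file-system-id "${FS_ID}" \
  --subnet-id "${SUBNET_ID}" \
  --security-groups "${SG_ID}"

# Step 7: this is the file system ID to write down.
echo "${FS_ID}"
```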

Mirroring the provisioner images to a private container registry

If you pull images from a private container registry, mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry. Complete the appropriate task for your environment:

Mirroring the provisioner images directly to a private container registry

To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:

  1. Log in to your private container registry:
    cpd-cli manage login-private-registry \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PUSH_USER} \
    ${PRIVATE_REGISTRY_PUSH_PASSWORD}

    If your private registry is not secured, see cpd-cli manage login-private-registry for additional options.

  2. Mirror the images to the private container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=${PRIVATE_REGISTRY_LOCATION} \
    --source_registry=k8s.gcr.io/sig-storage


Mirroring the provisioner images by using an intermediary container registry

To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:

  1. Mirror the images to the intermediary container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=127.0.0.1:12443 \
    --source_registry=k8s.gcr.io/sig-storage
  2. Move the intermediary container registry behind the firewall.
  3. Log in to your private container registry:
    cpd-cli manage login-private-registry \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PUSH_USER} \
    ${PRIVATE_REGISTRY_PUSH_PASSWORD}

    If your private registry is not secured, see cpd-cli manage login-private-registry for additional options.

  4. Mirror the images to the private container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=${PRIVATE_REGISTRY_LOCATION} \
    --source_registry=127.0.0.1:12443

Getting the connection details for your Amazon Elastic File System

Before you can set up dynamic provisioning, you must obtain the DNS name or IP address of your Amazon Elastic File System:

DNS name (recommended)
You can obtain the DNS name from the AWS Console on the Amazon EFS > File systems page. Select the file system that you want to use. The DNS name is in the General section.

The DNS name has the following format: <file-storage-id>.efs.<region>.amazonaws.com.

IP address
You can obtain the IP address from the AWS Console on the Amazon EFS > File systems page. Select the file system that you want to use. The IP address is on the Network tab.
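Because the DNS name follows a fixed pattern, you can also assemble it yourself from the file system ID and region. The ID and region below are hypothetical placeholders:

```shell
# Hypothetical file system ID and region; substitute your own values.
EFS_ID=fs-0123456789abcdef0
AWS_REGION=us-east-1

# The DNS name follows the <file-storage-id>.efs.<region>.amazonaws.com pattern.
EFS_LOCATION="${EFS_ID}.efs.${AWS_REGION}.amazonaws.com"
echo "${EFS_LOCATION}"
```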

Configuring dynamic storage

To configure dynamic storage:

  1. Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
    cpd-cli manage login-to-ocp \
    --username=${OCP_USERNAME} \
    --password=${OCP_PASSWORD} \
    --server=${OCP_URL}
    Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
  2. If you mirrored the images to a private container registry, update the global image pull secret so that the cluster can access the Kubernetes NFS-Client Provisioner images.

    The global image pull secret must contain the credentials of an account that can pull images from the private container registry:

    cpd-cli manage add-cred-to-global-pull-secret \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PULL_USER} \
    ${PRIVATE_REGISTRY_PULL_PASSWORD}
  3. Set the following environment variables:
    1. Set EFS_LOCATION to the DNS name or IP address of the EFS server:
      export EFS_LOCATION=<location>
    2. Set EFS_PATH to the EFS exported path. (The default path is /.)
      export EFS_PATH=/
    3. Set PROJECT_NFS_PROVISIONER to the project (namespace) where you want to deploy the Kubernetes NFS-Client Provisioner. The recommended project is nfs-provisioner; however, you can specify any project.
      Important: If you don't have the appropriate permissions to create projects, you must specify an existing project (namespace). If you have the appropriate permissions to create projects, the project is automatically created when you run the cpd-cli manage setup-nfs-provisioner command.
      export PROJECT_NFS_PROVISIONER=<project-name>
    4. Set EFS_STORAGE_CLASS to the name that you want to use for the EFS storage class. The recommended name is efs-nfs-client.
      export EFS_STORAGE_CLASS=efs-nfs-client
    5. Set NFS_IMAGE to the correct value for your Red Hat OpenShift Container Platform architecture:
      x86-64:
      export NFS_IMAGE=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
      ppc64le:
      export NFS_IMAGE=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
      s390x:
      export NFS_IMAGE=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
      Tip: If you don't know the architecture of your cluster, run the following commands:
      1. Run the following command to get the list of nodes on the cluster:
        oc get nodes

        Copy the name of one of the nodes.

      2. Run the following command to get information about the node.

        Replace <node-name> with the appropriate name.

        oc get nodes <node-name> -o jsonpath='{.status.nodeInfo}' | jq

        The output contains the architecture.
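      The architecture-to-image mapping in the preceding step can also be scripted. In this sketch, ARCH is a hypothetical placeholder for the value reported by the oc command above (the node info reports amd64 for x86-64):

      ```shell
      # Hypothetical: set ARCH to the architecture reported by
      # oc get nodes <node-name> -o jsonpath='{.status.nodeInfo.architecture}'
      ARCH=amd64

      case "${ARCH}" in
        amd64|x86_64)
          export NFS_IMAGE=k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          ;;
        ppc64le|s390x)
          export NFS_IMAGE=gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
          ;;
        *)
          echo "Unsupported architecture: ${ARCH}" >&2
          exit 1
          ;;
      esac
      echo "${NFS_IMAGE}"
      ```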

  4. Run the following command to set up dynamic provisioning:
    cpd-cli manage setup-nfs-provisioner \
    --nfs_server=${EFS_LOCATION} \
    --nfs_path=${EFS_PATH} \
    --nfs_provisioner_ns=${PROJECT_NFS_PROVISIONER} \
    --nfs_storageclass_name=${EFS_STORAGE_CLASS} \
    --nfs_provisioner_image=${NFS_IMAGE}
  5. Confirm that dynamic provisioning is working:
    1. Confirm that the storage class was created:
      oc get sc

      Review the list of storage classes to ensure that it contains the EFS storage class. The default storage class name is efs-nfs-client.

    2. Confirm that the nfs-client-provisioner pod is running:
      oc get pods -n ${PROJECT_NFS_PROVISIONER}

      The status of the pod should be Running.

    3. Confirm that the test persistent volume claim (PVC) that is created by the provisioner is bound:
      oc get pvc -n ${PROJECT_NFS_PROVISIONER}

      The status of the persistent volume claim should be Bound.
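If you want an additional end-to-end check, you can create a small test claim of your own against the EFS storage class. The following manifest is a minimal sketch: the name test-efs-claim is arbitrary, and the storage class assumes the recommended efs-nfs-client name. Apply it with oc apply -f, confirm that the claim reaches the Bound state, and then delete it.

```yaml
# Minimal test claim; the name is arbitrary and the storage class
# assumes the recommended efs-nfs-client name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-nfs-client
  resources:
    requests:
      storage: 1Mi
```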