Setting up shared storage for a remote physical location

If you plan to create Analytics Engine powered by Apache Spark service instances on the remote physical location, you must have shared storage that is accessible to the primary instance of IBM® Software Hub and the remote physical location. The shared storage must support dynamic provisioning.

Who needs to complete this task?

A cluster administrator must complete this task.

When do you need to complete this task?
Complete this task only if you plan to use the remote physical location to host Analytics Engine powered by Apache Spark service instances.

Before you begin

Best practice: You can run the commands in this task exactly as written if you set up environment variables for the remote physical location in addition to the installation environment variables script. For instructions, see Setting up environment variables for a remote physical location.
Before you run the commands in this task, ensure that you source the environment variables for:
  • The primary cluster
  • The remote physical location
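
For example, if you saved the installation environment variables in a script named cpd_vars.sh and the remote physical location variables in a script named cpd_vars_remote.sh (both file names are placeholders; use the names that you chose when you created the scripts), you can source both scripts in the same shell session:

    source ./cpd_vars.sh
    source ./cpd_vars_remote.sh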

About this task

The following storage is supported for the shared storage:

  • NFS

You must configure the remote physical location to support dynamic provisioning.

The steps in this procedure use the Kubernetes NFS-Client Provisioner (from the Kubernetes SIGs organization) to set up dynamic provisioning with NFS storage.

Important: You must have an existing NFS server to complete this task. Ensure that you know how to connect to your NFS server. At a minimum, you must have the hostname of the server.

Your NFS server must be accessible from your Red Hat® OpenShift® Container Platform cluster and the remote physical location.
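
For example, one way to spot-check that the NFS server is reachable and exporting the expected path is to list its exports with the showmount utility. This is an illustrative check, not a required step; showmount is provided by the NFS client packages:

    showmount -e <server-address>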

Mirroring the provisioner images to a private container registry

If you pull images from a private container registry, mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry. Complete the appropriate task for your environment:


Mirroring the provisioner images directly to a private container registry

To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:

  1. Log in to your private container registry:
    cpd-cli manage login-private-registry \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PUSH_USER} \
    ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    If your private registry is not secured, omit the following arguments:
    • ${PRIVATE_REGISTRY_PUSH_USER}
    • ${PRIVATE_REGISTRY_PUSH_PASSWORD}
  2. Mirror the images to the private container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=${PRIVATE_REGISTRY_LOCATION} \
    --source_registry=registry.k8s.io/sig-storage
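
The ${PRIVATE_REGISTRY_LOCATION}, ${PRIVATE_REGISTRY_PUSH_USER}, and ${PRIVATE_REGISTRY_PUSH_PASSWORD} variables are defined in your installation environment variables script. As a minimal sketch, the entries might look like the following example (all of the values are placeholders):

    export PRIVATE_REGISTRY_LOCATION=registry.example.com:5000
    export PRIVATE_REGISTRY_PUSH_USER=<push-user>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<push-password>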


Mirroring the provisioner images using an intermediary container registry

To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:

  1. Mirror the images to the intermediary container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=127.0.0.1:12443 \
    --source_registry=registry.k8s.io/sig-storage
  2. Move the intermediary container registry behind the firewall.
  3. Log in to your private container registry:
    cpd-cli manage login-private-registry \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PUSH_USER} \
    ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    If your private registry is not secured, omit the following arguments:
    • ${PRIVATE_REGISTRY_PUSH_USER}
    • ${PRIVATE_REGISTRY_PUSH_PASSWORD}
  4. Mirror the images to the private container registry:
    cpd-cli manage mirror-nfs-provisioner \
    --target_registry=${PRIVATE_REGISTRY_LOCATION} \
    --source_registry=127.0.0.1:12443
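
If you want to confirm that the provisioner images are present in a registry before you mirror them onward, you can query the Docker Registry HTTP API v2 endpoints. The following sketch queries the intermediary registry from the preceding steps; it assumes that the registry serves HTTPS with a self-signed certificate (the -k option skips TLS verification) and that the image keeps its source repository name:

    curl -k https://127.0.0.1:12443/v2/_catalog
    curl -k https://127.0.0.1:12443/v2/nfs-subdir-external-provisioner/tags/list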

Configuring dynamic storage

To configure dynamic storage:

  1. Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:
    ${REMOTE_CPDM_OC_LOGIN}
    Remember: REMOTE_CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command when you are connecting to a remote cluster.
  2. If you mirrored the images to a private container registry, update the global image pull secret so that the cluster can access the Kubernetes NFS-Client Provisioner images.

    The global image pull secret must contain the credentials of an account that can pull images from the private container registry:

    cpd-cli manage add-cred-to-global-pull-secret \
    --registry=${PRIVATE_REGISTRY_LOCATION} \
    --registry_pull_user=${PRIVATE_REGISTRY_PULL_USER} \
    --registry_pull_password=${PRIVATE_REGISTRY_PULL_PASSWORD}
  3. Set the following environment variables:
    1. Set the NFS_SERVER_LOCATION environment variable to the IP address or fully qualified domain name (FQDN) of the NFS server:
      export NFS_SERVER_LOCATION=<server-address>
    2. Set the NFS_PATH environment variable to the exported path where you want the provisioner to create sub-directories. (The default path is /.)
      export NFS_PATH=<path>
    3. Set the PROJECT_NFS_PROVISIONER environment variable to the project (namespace) where you want to deploy the Kubernetes NFS-Client Provisioner. The recommended project is nfs-provisioner; however, you can specify any project.
      Important: You must specify an existing project (namespace).
      export PROJECT_NFS_PROVISIONER=<project-name>
    4. Set the NFS_IMAGE environment variable to the location and name of the nfs-subdir-external-provisioner image to use to set up dynamic provisioning:
      Public registry
      export NFS_IMAGE=registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
      Private container registry
      export NFS_IMAGE=${PRIVATE_REGISTRY_LOCATION}/nfs-subdir-external-provisioner:v4.0.2
  4. Run the following command to create the NFS storage provisioner and storage class:
    cpd-cli manage setup-nfs-provisioner \
    --nfs_server=${NFS_SERVER_LOCATION} \
    --nfs_path=${NFS_PATH} \
    --nfs_path_pattern=\${.PVC.annotations.nfs.io/storage-path} \
    --nfs_provisioner_name=shared.fuseim.pri/ifs \
    --nfs_provisioner_image=${NFS_IMAGE} \
    --nfs_provisioner_ns=${PROJECT_NFS_PROVISIONER} \
    --nfs_storageclass_name=managed-nfs-shared-storage
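
When the command completes, the managed-nfs-shared-storage storage class is available on the cluster. Because the --nfs_path_pattern option in the preceding command reads the nfs.io/storage-path annotation, a claim can request a predictable sub-directory on the NFS export by setting that annotation. The following sketch verifies that the storage class exists and creates a test claim; the claim name, annotation value, and storage size are placeholders, and the claim is created in the provisioner project for convenience:

    oc get storageclass managed-nfs-shared-storage

    oc apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-storage-test
      namespace: ${PROJECT_NFS_PROVISIONER}
      annotations:
        nfs.io/storage-path: shared-storage-test
    spec:
      storageClassName: managed-nfs-shared-storage
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    EOF

If the claim binds and a sub-directory named shared-storage-test appears under the exported path on the NFS server, dynamic provisioning is working. You can then delete the test claim.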