Setting up shared storage for a remote physical location
If you plan to create Analytics Engine powered by Apache Spark service instances on the remote physical location, you must have shared storage that is accessible to the primary instance of IBM® Software Hub and the remote physical location. The shared storage must support dynamic provisioning.
- Who needs to complete this task?
- A cluster administrator must complete this task.
- When do you need to complete this task?
- Complete this task only if you plan to use the remote physical location to host Analytics Engine powered by Apache Spark service instances.
Before you begin
Ensure that the shared storage is accessible from:
- The primary cluster
- The remote physical location
About this task
The following storage is supported for the shared storage:
- NFS
You must configure the remote physical location to support dynamic provisioning.
The steps in this procedure use the Kubernetes NFS-Client Provisioner (from the Kubernetes SIGs organization) to set up dynamic provisioning with NFS storage.
Your NFS server must be accessible from your Red Hat® OpenShift® Container Platform cluster and the remote physical location.
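Before you continue, you can sanity-check that the NFS export is reachable from a machine on the cluster network. The following is a minimal sketch, assuming the `nfs-utils` (or `nfs-common`) package is installed and that `<nfs-server>` and `<exported-path>` are placeholders for your own server address and export:

```
# List the exports that the NFS server offers.
showmount -e <nfs-server>

# Optionally, verify that the export can be mounted read-write
# (mounting typically requires root privileges).
mkdir -p /mnt/nfs-test
mount -t nfs <nfs-server>:/<exported-path> /mnt/nfs-test
touch /mnt/nfs-test/.rw-check && rm /mnt/nfs-test/.rw-check
umount /mnt/nfs-test
```

If `showmount` fails or the mount is read-only, resolve the export configuration before you set up the provisioner.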
Mirroring the provisioner images to a private container registry
If you pull images from a private container registry, mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry. Complete the appropriate task for your environment:
- If your client workstation can connect to the internet and to the private container registry, you can mirror the images directly to your private container registry.
- If your client workstation cannot connect to both the internet and the private container registry at the same time, you must mirror the images to an intermediary registry before you can mirror the images to your private container registry.
Mirroring the provisioner images directly to a private container registry
To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:
- Log in to your private container registry:

  ```
  cpd-cli manage login-private-registry \
  ${PRIVATE_REGISTRY_LOCATION} \
  ${PRIVATE_REGISTRY_PUSH_USER} \
  ${PRIVATE_REGISTRY_PUSH_PASSWORD}
  ```

  If your private registry is not secured, omit the following arguments:
  - `${PRIVATE_REGISTRY_PUSH_USER}`
  - `${PRIVATE_REGISTRY_PUSH_PASSWORD}`
- Mirror the images to the private container registry:

  ```
  cpd-cli manage mirror-nfs-provisioner \
  --target_registry=${PRIVATE_REGISTRY_LOCATION} \
  --source_registry=registry.k8s.io/sig-storage
  ```
Mirroring the provisioner images using an intermediary container registry
To mirror the images for the Kubernetes NFS-Client Provisioner to your private container registry:
- Mirror the images to the intermediary container registry:

  ```
  cpd-cli manage mirror-nfs-provisioner \
  --target_registry=127.0.0.1:12443 \
  --source_registry=registry.k8s.io/sig-storage
  ```

- Move the intermediary container registry behind the firewall.
- Log in to your private container registry:

  ```
  cpd-cli manage login-private-registry \
  ${PRIVATE_REGISTRY_LOCATION} \
  ${PRIVATE_REGISTRY_PUSH_USER} \
  ${PRIVATE_REGISTRY_PUSH_PASSWORD}
  ```

  If your private registry is not secured, omit the following arguments:
  - `${PRIVATE_REGISTRY_PUSH_USER}`
  - `${PRIVATE_REGISTRY_PUSH_PASSWORD}`
- Mirror the images to the private container registry:

  ```
  cpd-cli manage mirror-nfs-provisioner \
  --target_registry=${PRIVATE_REGISTRY_LOCATION} \
  --source_registry=127.0.0.1:12443
  ```
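After mirroring, you can optionally confirm that the provisioner image is pullable from your private registry. A hedged sketch, assuming `skopeo` is installed on your workstation and that `v4.0.2` is the image tag you mirrored:

```
# Inspect the mirrored image without pulling it.
# Add --creds <user>:<password> if your registry is secured,
# or --tls-verify=false if it uses a self-signed certificate.
skopeo inspect \
  docker://${PRIVATE_REGISTRY_LOCATION}/nfs-subdir-external-provisioner:v4.0.2
```

If the command returns the image manifest metadata, the mirror succeeded.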
Configuring dynamic storage
To configure dynamic storage:
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:

  ```
  ${REMOTE_CPDM_OC_LOGIN}
  ```

  Remember: `REMOTE_CPDM_OC_LOGIN` is an alias for the `cpd-cli manage login-to-ocp` command when you are connecting to a remote cluster.

- If you mirrored the images to a private container registry, update the global image pull secret so that the cluster can access the Kubernetes NFS-Client Provisioner images. The global image pull secret must contain the credentials of an account that can pull images from the private container registry:

  ```
  cpd-cli manage add-cred-to-global-pull-secret \
  --registry=${PRIVATE_REGISTRY_LOCATION} \
  --registry_pull_user=${PRIVATE_REGISTRY_PULL_USER} \
  --registry_pull_password=${PRIVATE_REGISTRY_PULL_PASSWORD}
  ```

- Set the following environment variables:
  - Set the `NFS_SERVER_LOCATION` environment variable to the IP address or fully qualified domain name (FQDN) of the NFS server:

    ```
    export NFS_SERVER_LOCATION=<server-address>
    ```

  - Set the `NFS_PATH` environment variable to the exported path where you want the provisioner to create sub-directories. (The default path is /.)

    ```
    export NFS_PATH=<path>
    ```

  - Set the `PROJECT_NFS_PROVISIONER` environment variable to the project (namespace) where you want to deploy the Kubernetes NFS-Client Provisioner. The recommended project is `nfs-provisioner`; however, you can specify any project. Important: You must specify an existing project (namespace).

    ```
    export PROJECT_NFS_PROVISIONER=<project-name>
    ```

  - Set the `NFS_IMAGE` environment variable to the location and name of the `nfs-subdir-external-provisioner` image to use to set up dynamic provisioning:
    - Public registry:

      ```
      export NFS_IMAGE=registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
      ```

    - Private container registry:

      ```
      export NFS_IMAGE=${PRIVATE_REGISTRY_LOCATION}/nfs-subdir-external-provisioner:v4.0.2
      ```
- Run the following command to create the NFS storage provisioner and storage class:

  ```
  cpd-cli manage setup-nfs-provisioner \
  --nfs_server=${NFS_SERVER_LOCATION} \
  --nfs_path=${NFS_PATH} \
  --nfs_path_pattern=\${.PVC.annotations.nfs.io/storage-path} \
  --nfs_provisioner_name=shared.fuseim.pri/ifs \
  --nfs_provisioner_image=${NFS_IMAGE} \
  --nfs_provisioner_ns=${PROJECT_NFS_PROVISIONER} \
  --nfs_storageclass_name=managed-nfs-shared-storage
  ```
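After the provisioner is running, workloads request shared storage through the new storage class. The following is a hypothetical PVC manifest (the claim name and the annotation value are placeholders, not part of this procedure). The `nfs.io/storage-path` annotation is what the `--nfs_path_pattern` value above resolves, so the provisioner creates the backing directory at `${NFS_PATH}/<storage-path>` on the NFS server:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-shared-pvc             # placeholder name
  annotations:
    nfs.io/storage-path: "demo"     # subdirectory created under the exported NFS path
spec:
  storageClassName: managed-nfs-shared-storage
  accessModes:
    - ReadWriteMany                 # NFS supports shared read-write access
  resources:
    requests:
      storage: 10Gi
```

If the claim stays in `Pending`, check the provisioner pod logs in the `${PROJECT_NFS_PROVISIONER}` project.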
What to do next
Now that you've set up shared storage for the remote physical location, you're ready to complete Installing Analytics Engine powered by Apache Spark on a remote physical location.