Deploying storage client in the Hosted Control Plane clusters
Steps to deploy storage client in the Hosted Control Plane clusters.
Before you begin
To enable replica-1:
If replica-1 needs to be enabled on the external storage client cluster, ensure that the cluster uses the same topology and that the nodes are labeled in the same way as the hub (provider) nodes. For instructions about enabling replica-1 for external storage clients, see Steps to enable replica-1.
Procedure
Create a storage client in the Hosted Control Plane clusters.
- Deploy IBM Fusion in the Hosted Control Plane cluster. For the deployment procedure, see Installing the IBM Fusion base.
- Install the Data Foundation service and use the following YAML file to create a FusionServiceInstance CR. For steps to install Data Foundation, see Installing Fusion Data Foundation and Configuring Fusion Data Foundation in provider mode.

  ```yaml
  apiVersion: service.isf.ibm.com/v1
  kind: FusionServiceInstance
  metadata:
    name: odfmanager
    namespace: ibm-spectrum-fusion-ns
  spec:
    creator: User
    doInstall: true
    parameters:
      - name: namespace
        provided: false
        value: openshift-storage
      - name: creator
        provided: false
        value: Fusion
      - name: backingStorageType
        provided: false
        value: Client
      - name: autoUpgrade
        provided: true
        value: 'false'
      - name: enableLVMStorage
        provided: false
        value: 'false'
    serviceDefinition: data-foundation-service
    triggerUpdate: false
    updateServiceCRSpec: false
  ```

- Create a NetworkPolicy to allow the storage client to connect to the Data Foundation provider in the host cluster.
  Important: This step must be done in the host cluster.
  - Run the following command to get the Hosted Control Plane pods namespace:

    ```shell
    oc get hostedcontrolplanes.hypershift.openshift.io -A | grep <your-hcp-name>
    ```

  - Create the following YAML and replace the namespace with your Hosted Control Plane cluster namespace. The default value is clusters-<hcp-cluster-name>.

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: openshift-storage-egress
      namespace: <your-hcp-namespace>
    spec:
      egress:
        - to:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: openshift-storage
      podSelector:
        matchLabels:
          kubevirt.io: virt-launcher
      policyTypes:
        - Egress
    ```
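The two sub-steps above can be sketched end to end: look up the Hosted Control Plane namespace, then render the NetworkPolicy manifest for it. This is a hypothetical sketch; the sample command output and the cluster name my-hcp are illustrative, and on a live host cluster the namespace comes from the `oc get hostedcontrolplanes` command instead.

```shell
# Hypothetical sketch: sample output standing in for
# `oc get hostedcontrolplanes.hypershift.openshift.io -A | grep my-hcp`
hcp_output='NAMESPACE         NAME     PROGRESS    AVAILABLE
clusters-my-hcp   my-hcp   Completed   True'

# The namespace is the first column of the matching row.
ns=$(printf '%s\n' "$hcp_output" | grep 'my-hcp' | awk '{print $1}')

# Render the NetworkPolicy manifest (same content as the YAML above)
# with the namespace substituted in; apply it with `oc apply -f` on the
# host cluster.
cat > /tmp/openshift-storage-egress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: openshift-storage-egress
  namespace: $ns
spec:
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-storage
  podSelector:
    matchLabels:
      kubevirt.io: virt-launcher
  policyTypes:
    - Egress
EOF

grep 'namespace:' /tmp/openshift-storage-egress.yaml
```

The rendered manifest uses the default clusters-<hcp-cluster-name> pattern; on a real host cluster, follow the render with `oc apply -f /tmp/openshift-storage-egress.yaml`.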
- Generate the client onboarding token from the user interface as follows:
  Important: The onboarding token is valid for 48 hours and can be used only once.
  - Log in to the OpenShift® Container Platform web console.
  - Go to .
  - Click Generating client onboarding token to generate the onboarding token.
    The Client onboarding token page is displayed.
  - Click Copy to clipboard to copy the token to the clipboard.
- Get the storageProviderEndpoint as follows:
  - Run the following command to get the storageProviderEndpoint cluster IP:

    ```shell
    oc get svc -n openshift-storage ocs-provider-server
    ```

    Example output:

    ```
    NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
    ocs-provider-server   ClusterIP   172.30.3.161   <none>        50051/TCP   53d
    ```

  - Use the storageProviderEndpoint cluster IP.
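The lookup above can be scripted. The following is a hypothetical sketch that parses the CLUSTER-IP column out of the example output and appends the gRPC port to form the endpoint value; the sample text mirrors the example output shown above.

```shell
# Hypothetical sketch: sample text standing in for
# `oc get svc -n openshift-storage ocs-provider-server` output.
svc_output='NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
ocs-provider-server   ClusterIP   172.30.3.161   <none>        50051/TCP   53d'

# CLUSTER-IP is the third column of the data row.
cluster_ip=$(printf '%s\n' "$svc_output" | awk 'NR==2 {print $3}')
endpoint="${cluster_ip}:50051"
echo "$endpoint"   # 172.30.3.161:50051
```

On a live cluster, `oc get svc -n openshift-storage ocs-provider-server -o jsonpath='{.spec.clusterIP}'` reads the same value directly without column parsing.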
- Create the Data Foundation cluster CR as follows:
  - Create a YAML file for the Data Foundation cluster CR and replace ONBOARDING_TOKEN and PROVIDER_SERVICE_ENDPOINT with the correct values.
    For example:

    ```yaml
    apiVersion: odf.isf.ibm.com/v1
    kind: OdfCluster
    metadata:
      name: odfcluster
      namespace: ibm-spectrum-fusion-ns
    spec:
      storageClient:
        enable: true
        onboardingTicket: $ONBOARDING_TOKEN
        storageProviderEndpoint: '$PROVIDER_SERVICE_ENDPOINT:50051'
      creator: CreatedByFusion
    ```

  - Apply the Data Foundation cluster CR to create the storageclient.
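The substitution step can be sketched as a small script that renders the CR with the two values filled in before applying it. This is a hypothetical sketch; the token and endpoint values below are placeholders, and the temporary file name is an assumption.

```shell
# Hypothetical placeholder values: paste the real token copied from the UI,
# and use the ClusterIP obtained in the previous step.
ONBOARDING_TOKEN="example-token"
PROVIDER_SERVICE_ENDPOINT="172.30.3.161"

# Render the OdfCluster manifest (same content as the example above) with
# the variables expanded; apply it with `oc apply -f` afterwards.
cat > /tmp/odfcluster.yaml <<EOF
apiVersion: odf.isf.ibm.com/v1
kind: OdfCluster
metadata:
  name: odfcluster
  namespace: ibm-spectrum-fusion-ns
spec:
  storageClient:
    enable: true
    onboardingTicket: $ONBOARDING_TOKEN
    storageProviderEndpoint: '$PROVIDER_SERVICE_ENDPOINT:50051'
  creator: CreatedByFusion
EOF

grep storageProviderEndpoint /tmp/odfcluster.yaml
```

On a live cluster, follow the render with `oc apply -f /tmp/odfcluster.yaml`.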
- Verify the storageclient status from the IBM Fusion user interface.
- Verify the Storage Client and StorageClasses details from the OCP user interface.
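The user-interface checks can be complemented from the CLI. The following is a hypothetical sketch of reading the phase column from `oc get storageclient` output on the Hosted Control Plane cluster; the sample output and the client name are illustrative.

```shell
# Hypothetical sketch: sample text standing in for `oc get storageclient`
# output on the Hosted Control Plane cluster.
client_output='NAME             PHASE
storage-client   Connected'

# The phase is the second column of the data row; "Connected" indicates the
# client reached the Data Foundation provider.
phase=$(printf '%s\n' "$client_output" | awk 'NR==2 {print $2}')
echo "$phase"   # Connected
```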
What to do next
Steps to enable replica-1:
To enable replica-1 for external storage clients, do as follows:
- Enable replica-1.
  Command example:

  ```shell
  oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephNonResilientPools/enable", "value": true }]'
  ```

- Add the following annotation to the StorageConsumer:

  ```shell
  failureDomain=$(oc get storagecluster -n openshift-storage -o jsonpath='{.items[0].status.failureDomainKey}')
  oc annotate storageconsumer <consumer-name> \
    -n openshift-storage \
    ocs.openshift.io/non-resilient-pools-topology-key="$failureDomain"
  ```
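The annotation applied above is a single key=value pair built from the failure domain key. The following is a hypothetical sketch of that composition; "topology.kubernetes.io/zone" is an illustrative value, and the real key comes from the StorageCluster status via the jsonpath query shown above.

```shell
# Illustrative failure domain key; on a live cluster this comes from
# .status.failureDomainKey of the StorageCluster.
failureDomain="topology.kubernetes.io/zone"

# Compose the annotation exactly as passed to `oc annotate` above.
annotation="ocs.openshift.io/non-resilient-pools-topology-key=${failureDomain}"
echo "$annotation"   # ocs.openshift.io/non-resilient-pools-topology-key=topology.kubernetes.io/zone
```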