General Page
System configuration for this deployment
Set up an IBM Cloud Account for Secure Landing Zone
Set up Account Access (Cloud IAM)
Set up repository authorization
Install the HPCS Services and keys
Install the HPCS Services from the IBM Cloud catalog
Create a Hyper Protect Crypto Services root key
Retrieve the service instance GUID
Setting Service ID API Key permissions
Setting up SLZ for HPCS Service
Set up VPN certificates and store them in Secrets Manager
Install the HPCS-integrated Portworx service from the IBM Cloud catalog
Create the px-ibm secret in the portworx namespace
Install Portworx Enterprise from the IBM Cloud catalog
Create the Portworx storage class
Create your persistent volume claim (PVC)
Upgrade the Portworx Enterprise Service
Uninstall and wipe the Portworx service
Increasing cloud drive storage
Expanding the size of a persistent volume claim (PVC)
Portworx asynchronous disaster recovery example with IBM FTM for Check
Scale down FTM for Check on the destination cluster
Set up Portworx asynchronous disaster recovery
Cloud drives and how they compare with block storage
System configuration for this deployment
| Component | Used for this environment | Comments |
| IBM Cloud Account - Payment Model | Pay-As-You-Go | Can use an Enterprise account or the Pay-As-You-Go payment model. |
| IBM Cloud Account - Financial Services Validated | Enabled | Enabling this option is recommended. |
| IBM Cloud Account - IAM Privileges | Part of the Admin, Editor, and Writer access roles | The Admin, Editor, and Writer access roles are recommended. |
| IBM Cloud Account - API Key Owner | Part of the admin group | The admin group is recommended. |
| IBM Cloud Account - Resource Group | ftm-slz-03 | Can be any string. Used for distinguishing the resources. |
| IBM ROKS Cluster - Cluster Name | slz-03-wrkld-cluster | Can be any string. |
| IBM ROKS Cluster - Red Hat OpenShift Container Platform Version | 4.11 | Minimum 4.10 is recommended. |
| IBM ROKS Cluster - Cluster Zones | 3 zones (Dallas 1, Dallas 2, Dallas 3) | Three zones are recommended. The zones can differ according to your cloud account. |
| IBM ROKS Cluster - Master Nodes Configuration | 3 master nodes (1 per zone), Memory: 16 GB, Disk: 100 GB | These values are the minimum requirement for the master nodes of the ROKS cluster. |
| IBM ROKS Cluster - Worker Nodes Configuration | 12 worker nodes (4 per zone), bx2.16x64, Memory: 64 GB, Disk: 100 GB | Can use the minimum configuration of cx2.8x16 (vCPU: 8, Disk: 100 GB). |
| SLZ Toolchain - Code Repository | GitHub and GitHub Enterprise | Can be any of the supported repository types. |
| SLZ Toolchain - Access Token | GitHub and GitHub Enterprise | Must be from the same repository. |
| SLZ Toolchain - GitHub URL | https://us-south.git.cloud.ibm.com/users/sign_in | Differs according to the cloud account and the repository chosen. |
| SLZ Toolchain - DNS IPs | 192.0.2.0/11 | Differs for the user's cloud account and region. |
| HPCS - Name of Instance | HPCS-shared-fs-cloud-01 | Can be any human-readable string. |
| HPCS - Root Key Name | FTM_ROOT_KEY | Including ROOT_KEY in the name is recommended to identify it as a root key. |
| HPCS - Base Endpoint URL | For the Dallas region, the value for a private cluster (SLZ) is https://api.private.us-south.hs-crypto.cloud.ibm.com:12683 | Differs for the user's cloud account and region. |
| HPCS - Token Endpoint URL | For the Dallas region, the value for a private cluster (SLZ) is https://private.us-south.iam.cloud.ibm.com/identity/token | Differs for the user's cloud account and region. |
| Portworx - Service Version | Portworx Enterprise | Use "Portworx Enterprise with Disaster Recovery" for production environments. |
| Portworx - Region | Dallas (us-south) | Must be the same region where SLZ and HPCS are configured. |
| Portworx - Namespace in Red Hat OpenShift | portworx | A dedicated namespace is recommended for easy identification. |
| Portworx - Secret Name | px-ibm | Can be any human-readable string. |
| Portworx - Storage Class Name | ibmc-vpc-block-10iops-tier | The 10 IOPS tier is recommended. |
| Portworx - Installation Version | Portworx: 2.11.4, Stork: 2.12.1 | Differs according to the worker node OS version. For more information, see the Portworx documentation. |
An IBM Cloud account is required. An Enterprise account is recommended, but a Pay-As-You-Go account also works with this automation to deploy secure landing zone (SLZ) cloud resources. If you do not already have an account, follow the instructions to create one and upgrade it to Pay-As-You-Go.
Set up an IBM Cloud Account for Secure Landing Zone
- Log in to the IBM Cloud console with the IBMid that you used to set up the account. This IBMid user is the account owner and has all IAM access.
- Complete the company profile and contact information for the account. This information is required to stay in compliance with the IBM Cloud Financial Service profile.
- Enable the flag to designate your IBM Cloud account to be Financial Services Validated.
Set up account access (Cloud IAM)
- Create an IBM Cloud API key. The user who owns this key must be part of the admin group. The key is necessary if you provision resources manually.
- Set up MFA for all IBM Cloud IAM users.
- Set up Cloud IAM access groups. User access to cloud resources is controlled by the access policies that are assigned to access groups. The IBM Cloud Financial Services profile requires that IAM users are not assigned access directly to any cloud resources. When you assign access policies, click All Identity and Access enabled services from the list.
- For HPCS, you need to be assigned the Editor platform access role and the Writer service access role in IBM Cloud Identity and Access Management. For more information, see https://cloud.ibm.com/docs/containers?topic=containers-storage_portworx_encryption.
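If you prefer the CLI, the following sketch shows one way to create an access group, grant it the HPCS Editor and Writer roles, and add a user. The group name ftm-slz-admins and the user email are hypothetical examples; adjust the roles and service name to match your account policies.
ibmcloud iam access-group-create ftm-slz-admins -d "Access group for SLZ and HPCS administration"
ibmcloud iam access-group-policy-create ftm-slz-admins --roles Editor,Writer --service-name hs-crypto
ibmcloud iam access-group-user-add ftm-slz-admins user@example.com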
Set up repository authorization
The toolchain requires authorization to access your repository. If it does not have access, the toolchain requests that you authorize access. The following links show how you can create a personal access token for your repository.
You can manage your authorizations by using Manage Git Authorizations.
Install the HPCS Services and keys
Install the HPCS Services from the IBM Cloud catalog
Create an HPCS Service from the IBM Cloud catalog.
https://cloud.ibm.com/catalog/services/hyper-protect-crypto-services
- Select the us-south region.
- Select the Standard pricing plan.
- Enter the service name or keep the default randomized one.
- Select any resource group.
- Add any tags as needed for identification.
- For the Number of crypto units, enter 2 and leave cross-region failovers as 0.
- From the Allowed Networks list, select Public and private.
- Review the costs, accept the license agreements, and then click Create. The service appears in the Resource List in the IBM Cloud console.
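As an alternative to the catalog, the instance can also be created from the CLI. A minimal sketch, assuming the Standard plan in us-south and the example names from the configuration table; the provisioning parameter for two crypto units is passed as JSON.
ibmcloud resource service-instance-create HPCS-shared-fs-cloud-01 hs-crypto standard us-south -g ftm-slz-03 -p '{"units": 2}'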

Create a Hyper Protect Crypto Services root key
- Open the HPCS instance from the Resource List.
- In the KMS keys tab, in the Keys table, click Add key, and then select Create a key.
- To create a key, enter the following specifications:
- Key Type: root
- Key Name: A unique name for your key.
- Key Alias: Optionally, a human-readable string.
- Key ring ID: Keep the default.

- Create the key.
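If you script your setup, the key-protect CLI plugin (ibmcloud kp) can also create and list root keys, provided the plugin is installed and pointed at your HPCS instance endpoint. A sketch, assuming that configuration is in place; the kp plugin creates a root key by default.
ibmcloud kp key create FTM_ROOT_KEY -i <hpcs-instance-guid>
ibmcloud kp keys -i <hpcs-instance-guid>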

Retrieve the service instance GUID
Use the following command to get this GUID.
ibmcloud resource service-instance <hpcs-instance-name>
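To pull just the GUID out of the command output, you can request JSON output and filter it. A sketch, assuming the jq utility is available on your workstation.
ibmcloud resource service-instance <hpcs-instance-name> --output json | jq -r '.[0].guid'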

Setting Service ID API Key permissions
- In IAM, select Service IDs and then click Create.
- Add a name and description, and then click Create. Also, add the required permissions through access groups and access policies.

- Click the service ID and find Manage service ID.
- Click API Keys. Create one API key and download it for later use.
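The same service ID and API key can be created from the CLI. A sketch, assuming a hypothetical service ID name ftm-portworx-hpcs and a Reader role on HPCS; adjust the roles to what your Portworx integration requires.
ibmcloud iam service-id-create ftm-portworx-hpcs -d "Service ID for Portworx access to HPCS"
ibmcloud iam service-policy-create ftm-portworx-hpcs --roles Reader --service-name hs-crypto
ibmcloud iam service-api-key-create ftm-portworx-hpcs-key ftm-portworx-hpcs --file ftm-portworx-hpcs-key.json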
Setting up SLZ for HPCS Service
- Inside IBM Cloud, create a toolchain resource.
- Search for "Deploy infrastructure as code for the IBM Cloud for Financial Services".
- Give the toolchain a meaningful name, such as slz-toolchain-slz-03.
- Select a region.
- Create a meaningful resource group. For example, ftm-slz-03.
- Click Continue.
- Select the deployment pattern (typically ROKS on VPC), and then click Continue.
- Set up the Git repository for the platform code.
- Leave the default Git. It defaults to https://us-south.git.cloud.ibm.com/users/sign_in.
- Enter a meaningful repository name. For example, slz-roks-slz-03. This repository is created in the current user's Git workspace.
- Give it an API key with access to provision within the IBM Cloud account.
- Click Create Toolchain.
- Change the code in the repository for the SLZ environment. For example, the naming convention is slz-01, slz-02, and so on. Consider the node flavor when you are doing the master setup.
- Update the patterns/roks/terraform.tfvars file (or the equivalent file for a different pattern). You need to use a toolchain because you cannot use the source open-toolchain repository code directly from a Terraform repository; this code cannot be used as a Terraform module. An example for slz-03 is shown in the following figure.
- Commit the code back to the repository.
- Run the infrastructure toolchain in IBM Cloud.
- In the toolchain user interface, search for the delivery pipeline that is named infrastructure-pipeline and then click it.
- Click Run pipeline.
- This process might take a few hours.
- In some cases, it might fail. Running the pipeline a second or third time usually allows it to complete successfully.
Set up VPN certificates and store them in Secrets Manager
For more information, see Managing VPN server and client certificates.
Complete the following steps to set up VPN certificates and store them in Secrets Manager.
- Create an Easy-RSA certificate for the VPN connection.
- Clone the Easy-RSA 3 repository into your local folder.
git clone https://github.com/OpenVPN/easy-rsa.git
cd easy-rsa/easyrsa3
- Create a PKI and CA.
./easyrsa init-pki
./easyrsa build-ca nopass
- Check that the CA certificate file ./pki/ca.crt was generated.
- Generate a VPN server certificate. Use the name of the VPN as the server name.
./easyrsa build-server-full slz-03-wrkld-01-vpn nopass
- Check that the following files were generated.
- VPN server public key file: ./pki/issued/<VPN NAME>.crt
- Private key file: ./pki/private/<VPN NAME>.key
- Import certificates into the Secrets Manager.
- Go into the Secrets Manager instance that you want to use.
- Click Add +.
- Select TLS Certificate.
- Select Import a certificate.
- Provide a certificate name. For example, slz-01-wrkld-vpn-02 (VPN name + number of certificates).
- Select a secrets group (named per SLZ environment for isolation).
- Click Browse and then select ./pki/issued/<VPN NAME>.crt as the certificate file.
- Click Browse and then select ./pki/private/<VPN NAME>.key as the private key of the certificate.
- Click Browse and then select ./pki/ca.crt as the intermediate certificate.
- Optional: Enter a description.
- Click Import.
- Set up a VPN Server.
The VPN server is set up in the Management VPC and specifically the Management VPN Subnet for the SLZ cluster that you created.
- Create the VPN Server.
- Go to IBM Cloud > VPC > VPNs.
- Go to the Client to site servers tab.
- Click Create.
- Give it a meaningful name. For example, slz-01-mgmt-01-vpn.
- Select the correct geography based on where the Management VPC is located.
- Select the management resource group for the SLZ environment that you set up. For example, slz-01-mgmt-rg.
- Select the stand-alone mode.
- Select the subnets to attach. These subnets must be the VSI subnets and not the VPE subnets. You need to search in the cloud user interface for the new VPC, and identify them by Classless Inter-Domain Routing (CIDR) (for example, *.*.*.0/24) in the VPN user interface.
- In the authentication section:
- The certificate source must be Secrets Manager.
- Select ftm-secrets-manager-01.
- Pick the server SSL certificate that you created in the step Setup VPN Certificates and store them in Secrets Manager.
- Clear Client Certificate.
- Select User ID and passcode. The user ID and passcode that is generated from IBM Cloud Passcode is used.
- In additional configuration, provide the following values.
- DNS server 1: Enter 161.26.0.10. For more information about the DNS server addresses in IBM Cloud, see this link.
- DNS server 2: Enter 161.26.0.11.
- Transport protocol: UDP
- VPN Port: 443
- Tunnel mode: Split tunnel
- Click Create VPN Server.
- Go to your newly created VPN Server config.
- Go to the VPN Server Routes tab.
- Create five routes in total. Two are general values that you copy from these instructions and three are subnet IPs that you need to look up.
- Create DNS.
- Click Create.
- Name: DNS
- Destination CIDR: XXX.XXX.XXX.XXX/24
- Action: Translate
- Create Master.
- Click Create.
- Name: Master
- Destination CIDR: XXX.XXX.XXX.XXX/14
- Action: Translate
- Create to-wrkld-<slz-env-number>-<zone 1>:
- Click Create.
- Name: to-wrkld-<slz-env-number>-<zone 1> (for example, to-wrkld-03-dal01).
- Destination CIDR: Use the VSI (not the VPE) subnet CIDR for the correct zone. For example, *.*.*.*/24.
- Action: Translate
- Create to-wrkld-<slz-env-number>-<zone 2>:
- Click Create.
- Name: to-wrkld-<slz-env-number>-<zone 2> (for example, to-wrkld-03-dal02).
- Destination CIDR: Use the VSI (not the VPE) subnet CIDR for the correct zone. For example, *.*.*.*/24.
- Action: Translate
- Create to-wrkld-<slz-env-number>-<zone 3>:
- Click Create.
- Name: to-wrkld-<slz-env-number>-<zone 3> (for example, to-wrkld-03-dal03).
- Destination CIDR: Use the VSI (not the VPE) subnet CIDR for the correct zone. For example, *.*.*.*/24.
- Action: Translate
- Go to the Attached security groups tab.
- Click the security group (it has a randomly generated name).
- Go to the Rules tab.
- Create an Inbound rule.
- Protocol: UDP
- Port Range: 443 - 443
- Source type: Any
- Action: Create
- Set up subnets for the VPN.
- Identify any of the VSI subnets for your cluster and go into it.
- Find the link to the subnet access control list (ACL) that is attached to that subnet. It has a name such as slz-03-mgmt-acl.
- Observe the contents of the ACL carefully.
- Set up the VPN client.
- Go back to IBM Cloud > VPC > VPNs > Your new VPN server.
- Click the Clients tab.
- Click Download client profile. You can download the profile that is required for a VPN client to connect to this VPN.
- Distribute this profile to everyone who needs it.
- To log in, use the w3 username and the passcode generated from the IBM Cloud Passcode link.
Install the HPCS-integrated Portworx service from the IBM Cloud catalog
Create the px-ibm secret in the portworx namespace
Create the portworx namespace if it is not created already.
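If the namespace does not exist yet, a minimal sketch of creating it:
oc create namespace portworx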
oc create -f - << EOF
apiVersion: v1
kind: Secret
metadata:
  name: px-ibm
  namespace: portworx
type: Opaque
data:
  IBM_SERVICE_API_KEY: $(echo -n <API_KEY> | base64)
  IBM_INSTANCE_ID: $(echo -n <GUID> | base64)
  IBM_CUSTOMER_ROOT_KEY: $(echo -n <CUSTOMER_ROOT_KEY> | base64)
  IBM_BASE_URL: $(echo -n <BASE_ENDPOINT> | base64)
  IBM_TOKEN_URL: $(echo -n <TOKEN_ENDPOINT> | base64)
EOF
Where:
- <API_KEY> is the Service ID API key (with the right permissions).
- <GUID> is a section of the ID of the HPCS service. For example, if the ID is crn:v1:bluemix:public:hs-crypto:us-south:a/10cdfea524a24e62aecaff8b7e70f660:31e7fb32-b4bf-422f-9285-7484c1e955ce::, use the string 31e7fb32-b4bf-422f-9285-7484c1e955ce to do the encoding.
- <CUSTOMER_ROOT_KEY> is the ID of the root key.
- <BASE_ENDPOINT> is the KMS endpoint URL. Do not remove the protocol and port. Find it on the HPCS service Overview page. For the Dallas region, the value for a private cluster (SLZ) is https://api.private.us-south.hs-crypto.cloud.ibm.com:12683.
- <TOKEN_ENDPOINT> is the token endpoint. For the Dallas region, the value for a private cluster (SLZ) is https://private.us-south.iam.cloud.ibm.com/identity/token.
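To confirm that the values were encoded as expected, you can decode one of the fields back out of the secret. A quick check, for example:
oc get secret px-ibm -n portworx -o jsonpath='{.data.IBM_INSTANCE_ID}' | base64 -d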
Install Portworx Enterprise from the IBM Cloud catalog
Note: If you are using Db2U storage, the Db2U storage classes have the parameter repl: 3. Make sure that at least 3 nodes have storage.
- Select the same region as the cluster and HPCS service.
- Select the pricing plan as Enterprise. (You need to use Enterprise-DR for a production environment)
- Set a service name for your identification purposes.
- Add the tags: cluster: <cluster-ID> for identification later.
- Set the resource group to the resource group of the cluster.
- Enter the API Key for your account. The Red Hat OpenShift cluster field is populated with the cluster name.
- For the Portworx cluster name, enter any string. It prefixes your Portworx pod names. For example, portworx-slz-01.
- For namespace, enter the namespace where you want Portworx to be installed. It is recommended to keep it as kube-system.
- If you did not attach any block storage manually, you can have it provisioned in a managed way. For cloud drives, select Use Cloud Drives.
- Number of drives: The number of drives to provision on each worker node. Depending on this integer, you might get multiple storage class and size fields.
- Max Storage Nodes Per Zone: The number of nodes per zone that you want to be equipped with the drives.
- Storage Class name: The class of storage for these drives. Select ibmc-vpc-block-10iops-tier.
- Size: Enter the size in Gi for each drive.
- For metadata key-value, select the Portworx KVDB.
- For Secrets Store, select IBM Key Protect | HPCS.
- Leave the Advanced options, CSI enable, and Helm parameters fields at their default values.
- Select the Portworx version. For worker machines with RHEL8, select 2.11.4.
- The service shows a status of Active within 5 - 15 minutes.

- In the specified project, you can see the Portworx pods. Ensure that they are running and ready. Also, check whether the Portworx StorageCluster is online.
oc get po -n kube-system -l name=portworx
oc get stc -n kube-system
oc get storagenodes -n kube-system
To verify that HPCS is authenticated, get into any worker node and find the authentication entry in the journal by using the following commands.
oc debug node/<any-worker-node>
chroot /host
journalctl -leu portworx* | grep -i authentic
Look for the authenticated message:
If you see any other error about authentication, re-create the secret in the kube-system namespace and restart Portworx. For more information, see Restart the Portworx service.
Create the Portworx storage class
Create a Portworx-specific storage class by using the following command.
oc create -f - << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-sharedv4-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
EOF
Create your persistent volume claim
Create a persistent volume claim (PVC) by using the previously created storage class px-sharedv4-sc. The volume that is associated with the persistent volume (PV) is encrypted by a unique secret.
oc create -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: px-sharedv4-sc
EOF
Describe the PV claimed by the PVC and find the value of Volume ID.
oc describe pv <pv-name> | grep VolumeID
oc debug node/<any-node-ip>
chroot /host
pxctl volume inspect <volume-id>

Restart the Portworx service
To restart the Portworx service, add an environment variable in the StorageCluster .spec.env field and save it. All Portworx pods restart.
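A minimal sketch of what that edit might look like; the variable name PX_RESTART_TRIGGER is hypothetical, because any new environment variable triggers the rolling restart.
oc edit stc <storagecluster-name> -n kube-system
spec:
  env:
    - name: PX_RESTART_TRIGGER
      value: "1"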

Upgrade the Portworx Enterprise Service
To upgrade the Portworx service, update the version in the StorageCluster.
oc edit stc <storagecluster-name>
.spec.version: <new-version>

Save the changes. The Portworx pods restart one at a time.
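Equivalently, the version can be set with a single patch command. A sketch, assuming the StorageCluster is installed in kube-system:
oc patch stc <storagecluster-name> -n kube-system --type merge -p '{"spec":{"version":"<new-version>"}}'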
Uninstall and wipe the Portworx service
When the uninstall and wipe are complete, no Portworx-related objects of the following types must appear in the output of these commands:
oc get po, ds, stc, storagenode, pdb
oc get cm | grep px-bootstrap
oc get cm | grep px-cloud-drive
oc get secrets | grep helm
The Portworx operator deployment does not need to be deleted.
- Update the storage cluster and add the delete strategy (see the sketch after these steps).
oc get stc
oc edit stc <storagecluster-name>
- Delete the storage cluster. Wait for the wiper pods to finish and for all the Portworx pods to be deleted.
oc delete stc <storagecluster-name>
watch oc get po -n kube-system
- Delete the Portworx Enterprise service.
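A minimal sketch of the delete strategy stanza in the StorageCluster spec; UninstallAndWipe is the option that also wipes the Portworx data, so confirm that it matches your intent before you save.
spec:
  deleteStrategy:
    type: UninstallAndWipe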

Key rotation rotates your data encryption keys. Open your HPCS service and go to the KMS keys tab. Click the three-dot menu for the specific key and click Rotate Key. The last updated timestamp changes. Your application remains unaffected.
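The same rotation can be triggered from the key-protect CLI plugin. A sketch, assuming the plugin is configured for your HPCS instance:
ibmcloud kp key rotate <root-key-id> -i <hpcs-instance-guid>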

Increasing cloud drive storage
You cannot change the size of the provisioned cloud drives after they are set. You can increase only the number of nodes per zone. You cannot decrease it.
oc edit storagecluster/<portworx-storage-cluster> -n kube-system
You can increase the max_storage_nodes_per_zone in the annotation to have more worker nodes per zone with storage drives. The size cannot be changed, so do not update the size value in the annotation.

You can safely replace a node with or without storage attached. These steps can be used when the Red Hat OpenShift version of the node is updated or while a faulty node is replaced.
- Choose the node that you want to replace and use the debug command to access it.
oc debug node/<node-ip>

- Enter Portworx maintenance mode by using the following command.
pxctl service maintenance --enter
- Update or replace the node. Wait for the new node to start and reach Ready in the IBM Cloud console. The wait might take 5 - 10 minutes.
ibmcloud ks worker update -w <node-id> -c <cluster-id>
ibmcloud ks worker ls -c <cluster-id>
Portworx and a cloud drive are attached to the new node to maintain the storage nodes per zone value.
- Exit maintenance mode on any node that is still in maintenance.
oc rsh <pod-for-node-still-in-maintenance> /opt/pwx/bin/pxctl service maintenance --exit

- The status shows all nodes as online. The newest one provides storage.
/opt/pwx/bin/pxctl status
Expanding the size of a persistent volume claim (PVC)
The following steps are for resizing the PVC.
- On the Red Hat OpenShift web console, go to your PVC and select Actions > Expand PVC. Set the value in GiB to the size that you want the PVC to increase to.
- In the Portworx CLI, edit the PVC size.
oc debug node/<any-node-ip>
chroot /host
pxctl volume update <volume-id> --size=<size-in-GiB>
Portworx asynchronous disaster recovery example with IBM FTM for Check
You can set up Portworx asynchronous disaster recovery with a unidirectional ClusterPair and test it by using IBM FTM for Check 4.0.5.1 with IBM Db2U HADR.
The following list shows the infrastructure that you need.
- Two Red Hat OpenShift Kubernetes Service (ROKS) clusters in the same or different regions. For more information about a system configuration that you can use to configure these clusters, see System configuration for this deployment.
- IBM Cloud Object Storage instance in any region.
The following list shows the Portworx and HPCS prerequisites.
- The same version of Portworx Enterprise DR is installed on the ROKS clusters.
- The HPCS secret must refer to the same root key.
- The namespaces on both clusters have the same name.
Note: Db2 does not constantly write all data to the disk. A disaster recovery strategy that copies storage from one disk to another misses the data in the buffer pool that is not yet written to the disk. This situation might lead to data corruption. To avoid this problem, prepare Db2 for a snapshot before the disaster recovery copy so that all data is written to the disk.
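One common way to prepare Db2 for a storage-level copy is to suspend write I/O around the copy window. A sketch, run from the Db2 command line as the instance owner (for example, inside the Db2U pod); confirm the exact procedure with your Db2 operational documentation.
db2 set write suspend for database
# take the storage-level copy, or let the scheduled migration run, while writes are suspended
db2 set write resume for database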
This Portworx asynchronous disaster recovery example has the following general steps.
- Scale down FTM for Check on the destination cluster
- Set up Portworx asynchronous disaster recovery
- Do the failover operation
Scale down FTM for Check on the destination cluster
Portworx disaster recovery requires that you completely scale down the application, such as FTM for Check, in the destination namespace. All pods that mount persistent volume claims (PVCs) must be down so that these PVCs and persistent volumes (PVs) can be deleted. Portworx later re-creates them during the migration.
On the destination cluster, do a regular deployment of IBM FTM for Check in a namespace with the same name as the source namespace.
After the deployment completes, you can start to scale down the application.
- Set the FTM disaster recovery mode to passive and wait for all the application pods to scale down.
oc patch ftmcheck <instance-name> -p '{"spec":{"dr":{"mode":"passive"}}}' --type=merge
- Scale down the FTP server pod.
oc patch ftmcheck <instance-name> -p '{"spec":{"ftp_server":{"enable":false}}}' --type=merge
oc patch ftmcheck <instance-name> -p '{"spec":{"ftp_server":{"replicas":0}}}' --type=merge
- Scale down the operator pods so that the FTM and the FTM for Check operators do not re-create the PVCs.
oc scale deploy ftmbase-operator --replicas 0
oc scale deploy ftm-check-operator --replicas 0
- Scale down the FTM for Check artifacts pod because it uses the backup PVC.
oc scale deploy ftm-artifact-check --replicas 0
- Scale down the IBM MQ pods.
oc scale sts <instance-name>-ibm-mq --replicas 0
- Scale down the ldap-server deployment, Db2U, and the etcd statefulset of the Db2uClusters. Delete all instdb and restore-morph pods that show a status of Completed.
oc get deploy | grep c-db2u
oc scale deploy <deploy-name> --replicas 0
oc get sts | grep db2u
oc scale sts <sts-name> --replicas 0
Wait for all these pods to terminate. After these pods are down, you can delete all the PVCs.
oc delete pvc --all
The scaled down version of the FTM for Check deployment is ready to serve as a standby.
Set up Portworx asynchronous disaster recovery
To set up Portworx asynchronous disaster recovery, you need to do the following things.
- Get Stork for your local computer
- Enable load balancing on the Portworx service
- Create credentials for IBM Cloud Object Store
- Create the ClusterPair resource
- Schedule the migration
Get Stork for your local computer
You can get Stork by logging in to any cluster and running the following commands.
oc project kube-system
export STORK_POD=$(oc get pods -o=jsonpath='{.items[0].metadata.name}' -l name=stork)
oc cp $STORK_POD:/storkctl/linux/storkctl ./storkctl
If you get an unexpected EOF error, use the following command.
oc exec $STORK_POD -- cat /storkctl/linux/storkctl > storkctl
Update the file permissions for Stork to add the executable permission by running the following command.
chmod +x storkctl
After you add the executable file permission for the storkctl file, its file permissions are -rwxrwxr-x.
Enable load balancing on the Portworx service
Create a public load balancer on both the source and destination clusters.
Edit the StorageCluster object by using the following command.
oc edit stc <portworx-stc> -n kube-system
Change the following statement to set the service type of the Portworx service to LoadBalancer.
.metadata.annotations: portworx.io/service-type: portworx-service:LoadBalancer
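In the StorageCluster YAML, that annotation sits under metadata, roughly as in the following sketch (the value is quoted because it contains a colon).
metadata:
  annotations:
    portworx.io/service-type: "portworx-service:LoadBalancer"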
After you save the change, the service type of the portworx-service service changes to LoadBalancer and you can use the external IP address.
To see the service type for the services, you can run the oc get svc -n kube-system command. The following example output from the command shows where you can see that the service type for the portworx-service is LoadBalancer.
NAME               TYPE           CLUSTER-IP        EXTERNAL-IP
portworx-service   LoadBalancer   ###.###.###.###   1234abcd-us-south.lb.appdomain.cloud
Create credentials for IBM Cloud Object Store
IBM Cloud Object Storage is an S3 compliant object store. To create the credentials, do the following steps.
- Find the UUID of the destination cluster and save it. To find the UUID, run these commands on the destination cluster.
PX_POD=$(oc get pods -l name=portworx -n kube-system -o=jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
- Create an instance of IBM Cloud Object Storage in the resource group of the source cluster. You do not need to create a bucket. Also, create the service credential with a role of Writer and with HMAC authentication enabled. For more information, see https://cloud.ibm.com/docs/containers?topic=containers-storage-cos-understand#create_cos_secret
- Create a cloud store credential in both the source and the destination clusters by using the following command.
/opt/pwx/bin/pxctl credentials create \
  --provider s3 \
  --s3-access-key <access-key-id> \
  --s3-secret-key <secret-access-key> \
  --s3-region <region> \
  --s3-endpoint <endpoint> \
  --s3-storage-class STANDARD \
  clusterPair_<UUID-of-destination-cluster>
The values to use for this command are shown in the following list.
- The provider is s3.
- For s3-access-key, use the access key ID. This key ID can be found in the HMAC section of the Cloud Object Storage service credentials page.
- For s3-secret-key, use the secret access key. This key can be found in the HMAC section of the Cloud Object Storage service credentials page.
- For s3-region, use the region name where the Cloud Object Storage was created. It can be set to us-south.
- For s3-endpoint, use the direct endpoint for the specific region or cross-region.
- For <UUID-of-destination-cluster>, use the UUID of the destination cluster that you saved in a previous step.
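After you create the credential on each cluster, you can confirm that it is registered. A quick check, run from a Portworx pod or node shell:
/opt/pwx/bin/pxctl credentials list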
Create the ClusterPair resource
To create the ClusterPair, do the following steps.
- Get the destination cluster token by running the following commands on the destination cluster. You need this cluster token to configure the ClusterPair.
PX_POD=$(oc get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
oc exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl cluster token show
- Generate the ClusterPair spec on the destination cluster by running the following command on the cluster.
storkctl generate clusterpair -n <namespace-to-migrate> <clusterpair-name> -o yaml > clusterpair.yaml
For example:
storkctl generate clusterpair -n cluster-pair-dr cluster-pair-dr -o yaml > clusterpair.yaml
- Add the following properties to the ClusterPair spec under the .spec.options section (see the sketch after these steps). The property values must be in double quotation marks.
- ip: The external IP address of the load balancer on the destination cluster.
- port: The port for that node in the destination cluster. For example, 9001.
- token: The destination cluster token that you got from the destination cluster.
- mode: Set this property to DisasterRecovery.
- Apply the ClusterPair to the source cluster by running the following command.
oc apply -f clusterpair.yaml
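A sketch of how the edited options section of clusterpair.yaml might look; the ip value reuses the example load balancer hostname from earlier in this document, and the token is a placeholder.
spec:
  options:
    ip: "1234abcd-us-south.lb.appdomain.cloud"
    port: "9001"
    token: "<destination-cluster-token>"
    mode: "DisasterRecovery"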
Verify that the ClusterPair is ready by running the following command.
./storkctl get clusterpair -n <namespace-to-migrate>
The storage status and the scheduler status are shown as ready when the ClusterPair is ready. The following example output from the command shows where you can see the storage and scheduler status for the ClusterPair that is called cluster-pair-dr.
NAME              STORAGE-STATUS   SCHEDULER-STATUS   CREATED
cluster-pair-dr   Ready            Ready              <Date and time that the ClusterPair was created>
You can also use the following describe command to see the status.
oc describe clusterpair <clusterpair-name> -n <namespace-to-migrate>
The following example output from the describe command shows where you can see the storage and scheduler status for the ClusterPair.
Options:
Ip: 1234abcd-us-south.lb.appdomain.cloud
Mode: DisasterRecovery
Port: 9001
Token: <The name of the token>
Platform Options:
Status:
Remote Storage Id: <The remote storage ID>
Scheduler Status: Ready
Storage Status: Ready
Events: <none>
Schedule the migration
To schedule a migration, you need to create SchedulePolicy and MigrationSchedule objects.
Create a SchedulePolicy object in the source namespace on the source cluster. Set the interval timing to the frequency and network latency that you want.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: policy-hourly
  namespace: <namespace-to-migrate>
policy:
  interval:
    intervalMinutes: 60
Apply the policy by running the following command.
oc apply -f policy-hourly.yaml
You can use the following command to verify that the SchedulePolicy object was created.
./storkctl get schedulepolicy -n <namespace-to-migrate>
The following example output from the command shows where you can see the policy-hourly schedule policy that you created.
NAME                   INTERVAL-MINUTES   DAILY     WEEKLY   MONTHLY
default-daily-policy   N/A                12:00AM   N/A      N/A
policy-hourly          60                 N/A       N/A      N/A
Create a MigrationSchedule object in the source namespace on the source cluster. Creating this object starts the first migration.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: <object-name>
  namespace: <namespace-to-migrate>
spec:
  template:
    spec:
      clusterPair: <clusterpair-name>
      includeResources: false
      includeVolumes: true
      includeApplications: false
      namespaces:
        - <namespace-to-migrate>
  schedulePolicyName: <schedulepolicy-name>
  suspend: false
  autoSuspend: false
Create the schedule object by running the following command.
oc apply -f migrate-dr.yaml
You can use the following command to verify that the MigrationSchedule object was created.
./storkctl get migration -n <namespace-to-migrate>
The following example output from the command shows that the first migration started when you created the MigrationSchedule object.
NAME         CLUSTERPAIR       STAGE     STATUS       VOLUMES   RESOURCES
migrate-dr   cluster-pair-dr   Volumes   InProgress   0/20      N/A
The migration is now scheduled and running. Only the data from the volumes is copied at regular intervals into the destination namespace. Check the migration status by running the describe command for the created resource.
oc describe migration migrate-dr -n <migration-namespace>
The following example output from the command shows that the migration completed successfully.
Status:
Application Activated: false
Items:
Interval:
Creation Timestamp: <Date and time that the migration was created>
Finish Timestamp: <Date and time that the migration completed>
Name: migrate-dr-interval
Status: Successful
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Successful 7m stork Scheduled migration completed successfully
You can use the following command to show statistics from the migration.
./storkctl get migration -n <namespace-to-migrate>
The following example output from the command shows the 20 Portworx volumes that correspond to the 20 PVCs that are being migrated. Later, the 20 PVs and 20 PVCs are created on the destination cluster by Portworx.
NAME         CLUSTERPAIR       STAGE   STATUS       VOLUMES   RESOURCES
migrate-dr   cluster-pair-dr   Final   Successful   20/20     40/40
Do the failover operation
When you need to fail over to the destination cluster, do the following steps.
- Stop the data migrations
- Migrate the Db2 SSL secret data
- Scale up the application on the destination cluster
Stop the data migrations
Data migration must be stopped during the failover because data cannot be re-created while the destination namespace is fully scaled up. Delete the MigrationSchedule object in the source cluster to stop the migrations.
oc delete -f migrate-dr.yaml
Migrate the Db2 SSL secret data
Copy the data, such as certificate and TLS data, from the ftm-db2-ssl-cert-secret in the source namespace to the ftm-db2-ssl-cert-secret in the destination namespace.
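One way to copy the data is to export the secret from the source cluster and apply it on the destination cluster. A sketch, assuming you are logged in to each cluster in turn and that <namespace> is the shared namespace name.
# On the source cluster
oc get secret ftm-db2-ssl-cert-secret -n <namespace> -o yaml > ftm-db2-ssl-cert-secret.yaml
# Remove cluster-specific metadata (uid, resourceVersion, creationTimestamp) from the file, then on the destination cluster
oc apply -f ftm-db2-ssl-cert-secret.yaml -n <namespace>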

Scale up the application on the destination cluster
After the previous steps are complete, scale up the application on the destination cluster.
- Scale up the IBM MQ pods.
oc scale sts <instance-name>-ibm-mq --replicas 3
- Scale up the Db2uClusters.
oc get deploy | grep c-db2u
oc scale deploy <deploy-name> --replicas 1
oc get sts | grep db2u
oc scale sts <sts-name> --replicas 1
- Scale up the FTM for Check artifacts pod.
oc scale deploy ftm-artifact-check --replicas 1
- Scale up the FTM and the FTM for Check operator pods. The operators re-create the PVCs.
oc scale deploy ftmbase-operator --replicas 1
oc scale deploy ftm-check-operator --replicas 1
- Set the FTM disaster recovery mode to active. All the application pods start.
oc patch ftmcheck <instance-name> -p '{"spec":{"dr":{"mode":"active"}}}' --type=merge
- Scale up the FTP server pod.
oc patch ftmcheck <instance-name> -p '{"spec":{"ftp_server":{"enable":true}}}' --type=merge
After everything scales back up, the application is running on the destination cluster. You can now view the transferred data in the pods, the database, and the Control Center user interface.
Cloud drives and how they compare with block storage
Cloud drives are block volumes that are provisioned by using the IBM CSI driver in the user's IBM Cloud account. They are attached to each worker node of the cluster based on the specification that is selected when Portworx is installed. These drives are maintained in the user account; there is no separate repository because the drives are VPC Gen2-based CSI block volumes. For more information about operating cloud drives, see https://docs.portworx.com/operations/operate-kubernetes/cloud-drive-operations/ibm/operate-cloud-drives/
The following table is a comparison chart for the options on the Portworx installation screen.
| Parameters | Use already attached drives. | Use cloud drives. |
| Ease-of-use | User needs to provision block volumes and attach them to the worker nodes. | User provides a specification to provision block volumes in the user account, and those volumes are attached to each worker node. |
| Ease-of-portability | Drives that are attached to the worker nodes cannot be moved automatically to storageless nodes. | Drives that are provisioned by using Portworx are maintained by Portworx. They can be used on a storageless node if the storage node suffers a failure. |
| Ease-of-access | Easy to use from the catalog. | Easy to use from the catalog. |
| Auto-scaling | No | Yes |
| Security | Yes, as the drives are in the customer account. | Yes, as the drives are in the customer account. |
| Costing | Cost for block drives is based on IOPS and size in IBM Cloud; no impact on Portworx cost. | Cost for block drives is based on IOPS and size in IBM Cloud; no impact on Portworx cost. |
Document Information
Modified date:
23 June 2023
UID
ibm16955955