
FTM example deployment on IBM Cloud - Secure Landing Zone, HPCS, and Portworx


This example describes an FTM deployment that uses the Secure Landing Zone (SLZ), HPCS service, and Portworx service on IBM Cloud. You can use it as a reference when you are designing your own FTM deployment on IBM Cloud.

Contents

System configuration for this deployment

Set up an IBM Cloud account

     Set up an IBM Cloud Account for Secure Landing Zone

     Set up account access (Cloud IAM)

     Set up repository authorization

Install the HPCS Services and keys

     Install the HPCS Services from the IBM Cloud catalog

     Create a Hyper Protect Crypto Services root key

     Retrieve the service instance GUID

     Setting Service ID API Key permissions

Setting up SLZ for HPCS Service

     Set up the SLZ toolchain

     Set up VPN certificates and store them in Secrets Manager

Install the HPCS-integrated Portworx service from the IBM Cloud catalog

     Create the px-ibm secret in the portworx namespace

     Install Portworx Enterprise from the IBM Cloud catalog

     Create the Portworx storage class

     Create your persistent volume claim (PVC)

Maintenance activities

     Restart the Portworx service

     Upgrade the Portworx Enterprise Service

     Uninstall and wipe the Portworx service

     Key rotation

Troubleshooting

     Increasing cloud drive storage

     Replacing nodes

     Expanding the size of a persistent volume claim (PVC)

Portworx asynchronous disaster recovery example with IBM FTM for Check

     Scale down FTM for Check on the destination cluster

     Set up Portworx asynchronous disaster recovery

     Do the failover operation

More Information

     Cloud drives and how they compare with block storage

 

System configuration for this deployment

Component

Used for this environment

Comments

IBM Cloud Account – Payment Model

Pay-As-You-Go

Can use Enterprise account or Pay-As-You-Go Payment Model.

IBM Cloud Account – Financial Services Validated

Enabled

Enabling this option is recommended.

IBM Cloud Account - IAM Privileges

Part of Admin, Editor, Writer access roles.

Admin, Editor, and Writer access roles are recommended.

IBM Cloud Account – API Key Owner

Part of admin group

Admin group is recommended.

IBM Cloud Account – Resource Group

ftm-slz-03

Can be any string. Used for distinguishing the resources.

IBM ROKS Cluster – Cluster Name

slz-03-wrkld-cluster

Can be any string.

IBM ROKS Cluster – Red Hat OpenShift Container Platform Version

4.11

Minimum 4.10 is recommended.

IBM ROKS Cluster – Cluster Zones

3 zones (Dallas 1, Dallas 2, Dallas 3)

Three zones are recommended. The available zones can differ according to your cloud account.

IBM ROKS Cluster – Master Nodes Configuration

3 master nodes (1 per zone)
cx2.8x16
vCPU: 8

Memory: 16 GB

Disk: 100 GB

These values are the minimum requirement for the master nodes of the ROKS cluster.

IBM ROKS Cluster – Worker Nodes Configuration

12 worker nodes (4 per zone)

bx2.16x64
vCPU: 16

Memory: 64 GB

Disk: 100 GB

The minimum configuration of cx2.8x16 can be used:

vCPU: 8
Memory: 16 GB

Disk: 100 GB

SLZ Toolchain – Code Repository

GitHub and GitHub Enterprise

Can be GitHub, GitHub Enterprise, GitLab, or Bitbucket.

SLZ Toolchain – Access token

GitHub and GitHub Enterprise

Must be from the same repository provider that you selected for the code repository.

SLZ Toolchain – GitHub URL

https://us-south.git.cloud.ibm.com/users/sign_in

Differs according to the cloud account and the repository chosen.

SLZ Toolchain – DNS IPs

192.0.2.0/11

It is different for the user’s cloud account and region.

HPCS – Name of Instance

HPCS-shared-fs-cloud-01

Can be any human-readable string.

HPCS – Root Key Name

FTM_ROOT_KEY

Including ROOT_KEY in the name is recommended so that the key can be identified as a root key.

HPCS-Base Endpoint URL

For the Dallas region, the value for private cluster (SLZ) is https://api.private.us-south.hs-crypto.cloud.ibm.com:12683

It is different for the user’s cloud account and region.

HPCS-Token Endpoint URL

For the Dallas region, the value for private cluster (SLZ) is https://private.us-south.iam.cloud.ibm.com/identity/token

It is different for the user’s cloud account and region.

Portworx – Service Version

Portworx - Enterprise

Recommend that you use “Portworx Enterprise with Disaster-Recovery” for production environments.

Portworx - Region

Dallas (us-south)

Must be the same region where SLZ and HPCS are configured.

Portworx-Namespace in Red Hat OpenShift

portworx

A dedicated namespace is recommended for easy identification.

Portworx – secret name

px-ibm

Can be any human-readable string.

Portworx-Storage Class name

ibmc-vpc-block-10iops-tier

10 IOPS is recommended.

Portworx-Installation Version

Portworx: 2.11.4, Stork: 2.12.1

It is different according to the worker node OS version. For more information, see the Portworx documentation.



Set up an IBM Cloud account

An IBM Cloud account is required. An Enterprise account is recommended but a Pay-As-You-Go account can be used to deploy secure landing zone (SLZ) cloud resources.

If you do not already have an account, follow the instructions to create the account and upgrade it to Pay-As-You-Go. If you already have access to an IBM Cloud account, an Enterprise account is recommended, but a Pay-As-You-Go account also works with this automation.


 

Set up an IBM Cloud Account for Secure Landing Zone

  1. Log in to the IBM Cloud console with the IBMid that you used to set up the account. This IBMid user is the account owner and has all IAM access.
  2. Complete the company profile and contact information for the account. This information is required to stay in compliance with the IBM Cloud Financial Service profile.
  3. Enable the flag to designate your IBM Cloud account to be Financial Services Validated.

 

Set up account access (Cloud IAM)

  1. Create an IBM Cloud API key. The user who owns this key must be part of the admins group. This key is necessary if you provision resources manually.
  2. Set up MFA for all IBM Cloud IAM users.
  3. Set up Cloud IAM access groups. User access to cloud resources is controlled by the access policies that are assigned to access groups. The IBM Cloud Financial Services profile requires that IAM users are not assigned access directly to any cloud resources. When you assign access policies, click All Identity and Access enabled services from the list.
  4. You need to be assigned the Editor platform access role and the Writer service access role in IBM Cloud Identity and Access Management for HPCS. For more information, see
    https://cloud.ibm.com/docs/containers?topic=containers-storage_portworx_encryption.

 

Set up repository authorization

The toolchain requires authorization to access your repository. If it does not have access, the toolchain requests that you authorize access. See the documentation for your repository provider for how to create a personal access token for your repository.

You can manage your authorizations by using Manage Git Authorizations.



Install the HPCS Services and keys


 

Install the HPCS Services from the IBM Cloud catalog

Create an HPCS Service from the IBM Cloud catalog.
https://cloud.ibm.com/catalog/services/hyper-protect-crypto-services

  1. Select the us-south region.
  2. Select the Standard pricing plan.
  3. Enter the service name or keep the default randomized one.
  4. Select any resource group.
  5. Add any tags as needed for identification.
  6. For the Number of crypto units, enter 2 and leave cross-region failovers as 0.
  7. From the Allowed Networks list, select Public and private.
  8. Review the costs, accept the license agreements, and then click Create. The service then appears in the Resource List of the IBM Cloud console.
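Optionally, if the IBM Cloud CLI is installed and you are logged in to the target account, you can confirm from the command line that the instance was created. This is a minimal check; the grep pattern assumes an instance name similar to the one used in this example.

ibmcloud resource service-instances | grep -i hpcs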



 

Create a Hyper Protect Crypto Services root key

  1. Open the HPCS instance from the Resource List.
  2. In the KMS keys tab, in the Keys table, click Add key, and then select Create a key.
  3. To create a key, enter the following specifications:
    • Key Type: root
    • Key Name: A unique name for your key.
    • Key Alias: Optionally, a human-readable string.
    • Key ring ID: Keep the default.
  4. Create the key.

 

Retrieve the service instance GUID

Use the following command to get this GUID.

ibmcloud resource service-instance <hpcs-instance-name>
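If you want to script this step, the GUID can also be extracted from the JSON output. The following sketch assumes that your CLI version supports the --output JSON flag and that the jq utility is installed.

ibmcloud resource service-instance <hpcs-instance-name> --output JSON | jq -r '.[0].guid'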


 

Setting Service ID API Key permissions

  1. In IAM, select Service IDs and then click Create.
  2. Add a name and description, and then click Create. Also, assign the required access groups and access policies to the service ID.
  3. Click the service ID and find Manage service ID.
  4. Click API Keys. Create one API key and download it for later use.
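The service ID and its API key can also be created from the command line. The following is a sketch only; the names ftm-portworx-svc-id and ftm-portworx-api-key are placeholders, and you still need to assign the access groups and access policies as described in step 2.

ibmcloud iam service-id-create ftm-portworx-svc-id -d "Service ID for the Portworx HPCS integration"
ibmcloud iam service-api-key-create ftm-portworx-api-key ftm-portworx-svc-id -d "API key for the px-ibm secret" --file px-ibm-apikey.json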


Setting up SLZ for HPCS Service


 

Set up the SLZ toolchain

  1. Inside IBM Cloud, create a toolchain resource.
    1. Search for “Deploy infrastructure as code for the IBM Cloud for Financial Services”.
      1. Give the toolchain a meaningful name, such as slz-toolchain-slz-03.
      2. Select a region.
      3. Create a meaningful resource group. For example, ftm-slz-03.
      4. Click Continue.
    2. Select the deployment pattern (typically ROKS on VPC), and then click Continue.
    3. Set up the Git repository for the platform code.
      1. Leave the default Git. It defaults to https://us-south.git.cloud.ibm.com/users/sign_in.
      2. Enter a meaningful repository name. For example, slz-roks-slz-03.
        This repository is created in the current user’s Git workspace.
    4. Provide an API key that has access to provision resources within the IBM Cloud account.
    5. Click Create Toolchain.
  2. Change the code in the repository for the SLZ environment. For example, the naming convention is slz-01, slz-02, and so on. Consider the node flavor when you set up the master nodes.
    1. Update the patterns/roks/terraform.tfvars file, or the equivalent file for a different pattern. You need to use a toolchain because the source open-toolchain repository code cannot be used directly from a Terraform repository or as a Terraform module.

    2. Commit the code back to the repository.
  3. Run the infrastructure toolchain in IBM Cloud.
    1. In the Toolchain user interface, search for Delivery Pipeline infrastructure-pipeline and then click it.
    2. Click Run pipeline.
      1. This process might take a few hours.
      2. In some cases, it might fail. Running the pipeline a second or third time usually allows it to complete successfully.

 

Set up VPN certificates and store them in Secrets Manager

For more information, see Managing VPN server and client certificates.

Complete the following steps to set up VPN certificates and store them in Secrets Manager.

  1. Create an Easy-RSA certificate for the VPN connection.
    1. Clone the Easy-RSA 3 repository into your local folder.
      git clone https://github.com/OpenVPN/easy-rsa.git
      cd easy-rsa/easyrsa3
    2. Create a PKI and CA.
      ./easyrsa init-pki
      ./easyrsa build-ca nopass
    3. Check that the CA certificate file ./pki/ca.crt was generated.
    4. Generate a VPN server certificate. Use the name of the VPN as the server name.
      ./easyrsa build-server-full slz-03-wrkld-01-vpn nopass
    5. Check that the following files were generated.
      1. VPN server public key file: ./pki/issued/<VPN NAME>.crt
      2. Private key file: ./pki/private/<VPN NAME>.key
  2. Import certificates into the Secrets Manager.
    1. Go into the Secrets Manager instance that you want to use.
    2. Click Add +.
    3. Select TLS Certificate.
    4. Select Import a certificate.
      1. Provide a certificate name. For example, slz-01-wrkld-vpn-02 (VPN name + number of certs).
      2. Select a secrets group (named per SLZ environment for isolation).
      3. Click Browse and then select ./pki/issued/<VPN NAME>.crt as the certificate file.
      4. Click Browse and then select ./pki/private/<VPN NAME>.key as the private key of the certificate.
      5. Click Browse and then select ./pki/ca.crt as the intermediate certificate.
      6. Optional: Enter a description.
      7. Click Import.
  3. Set up a VPN Server.

    The VPN server is set up in the Management VPC and specifically the Management VPN Subnet for the SLZ cluster that you created.

    1. Create the VPN Server.
      1. Go to IBM Cloud > VPC > VPNs.
      2. Go to the Client to site servers tab.
      3. Click Create.
        1. Give it a meaningful name. For example, slz-01-mgmt-01-vpn.
        2. Select the correct geography based on where the Management VPC is located.
        3. Select the management resource group for the SLZ environment that you set up. For example, slz-01-mgmt-rg.
        4. Select the stand-alone mode.
        5. Select the subnets to attach. These subnets must be the VSI subnets and not the VPE subnets. You need to search in the cloud user interface for the new VPC, and identify them by Classless Inter-Domain Routing (CIDR) (For example, *.*.*.0/24) in the VPN user interface.
        6. In the authentication section.
          1. The certificate source must be Secrets Manager.
          2. Select ftm-secrets-manager-01.
          3. Pick the server SSL certificate that you created in Set up VPN certificates and store them in Secrets Manager.
          4. Clear Client Certificate.
          5. Select User ID and passcode. The user ID and passcode that is generated from IBM Cloud Passcode is used.
        7. In additional configuration, provide the following values.
          • DNS server 1: Enter 161.26.0.10. For more information about the DNS server addresses in IBM Cloud, see the IBM Cloud VPC documentation.
          • DNS server 2: Enter 161.26.0.11.
          • Transport protocol: UDP
          • VPN Port: 443
          • Tunnel mode: Split tunnel
        8. Click Create VPN Server.
    2. Go to your newly created VPN Server config.
      1. Go to the VPN Server Routes tab.
      2. Create five routes in total. Two are general values that you copy from these instructions and three are subnet IPs that you need to look up.
        1. Create DNS.
          1. Click Create.
          2. Name: DNS
          3. Destination CIDR: XXX.XXX.XXX.XXX/24
          4. Action: Translate
        2. Create Master.
          1. Click Create.
          2. Name: Master
          3. Destination CIDR: XXX.XXX.XXX.XXX/14
          4. Action: Translate
        3. Create to-wrkld-<slz-env-number>-<zone 1>:
          1. Click Create.
          2. Name: to-wrkld-<slz-env-number>-<zone 1> (For example, to-wrkld-03-dal01).
          3. Destination CIDR: Use the VSI (not the VPE) subnet CIDR for the correct zone. For example, *.*.*.*/24
          4. Action: Translate
        4. Create to-wrkld-<slz-env-number>-<zone 2>:
          1. Click Create.
          2. Name: to-wrkld-<slz-env-number>-<zone 2> (For example, to-wrkld-03-dal02).
          3. Destination CIDR: Use the VSI (not the VPE) subnet CIDR for the correct zone. For example, *.*.*.*/24
          4. Action: Translate
        5. Create to-wrkld-<slz-env-number>-<zone 3>:
          1. Click Create.
          2. Name: to-wrkld-<slz-env-number>-<zone 3> (For example, to-wrkld-03-dal03).
          3. Destination CIDR: Use the VSI (not the VPE) subnet CIDR for the correct zone. For example, *.*.*.*/24
          4. Action: Translate
      3. Go to the Attached security groups tab.
        1. Click the security group (it has some random name).
        2. Go to the Rules tab.
        3. Create an Inbound rule.
          • Protocol: UDP
          • Port Range: 443 - 443
          • Source type: Any
          Then, click Create to add the rule.
    3. Set up subnets for the VPN.
      1. Identify any of the VSI subnets for your cluster and go into it.
      2. Find the link to the subnet access control list attached to that subnet.
      3. It has a name such as slz-03-mgmt-acl.
      4. Observe the contents of the ACL carefully.
    4. Set up the VPN client.
      1. Go back to IBM Cloud > VPC > VPNs > Your new VPN server.
      2. Click the Clients tab.
      3. Click Download client profile. You can download the profile that is required for a VPN client to connect to this VPN.
      4. Distribute this profile to everyone who needs it.
      5. To log in, use the w3 username and the passcode generated from the IBM Cloud Passcode link.


Install the HPCS-integrated Portworx service from the IBM Cloud catalog


 

Create the px-ibm secret in the portworx namespace

Create the portworx namespace if it is not created already.
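For example, assuming that you are logged in to the cluster with sufficient privileges, you can create the namespace with the following command.

oc create namespace portworx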
 

oc create -f - << EOF
apiVersion: v1
kind: Secret
metadata:
  name: px-ibm
  namespace: portworx
type: Opaque
data:
  IBM_SERVICE_API_KEY: $(echo -n <API_KEY> | base64)
  IBM_INSTANCE_ID: $(echo -n <GUID> | base64)
  IBM_CUSTOMER_ROOT_KEY: $(echo -n <CUSTOMER_ROOT_KEY> | base64)
  IBM_BASE_URL: $(echo -n <BASE_ENDPOINT> | base64)
  IBM_TOKEN_URL: $(echo -n <TOKEN_ENDPOINT> | base64)
EOF
 

Where

  1. <API_KEY> is the Service ID API Key (with the right permissions).
  2. <GUID> is a section of the ID of the HPCS service. For example, if the ID is
    crn:v1:bluemix:public:hs-crypto:us-south:a/10cdfea524a24e62aecaff8b7e70f660:31e7fb32-b4bf-422f-9285-7484c1e955ce::
    use the string 31e7fb32-b4bf-422f-9285-7484c1e955ce to do the encoding.
  3. <CUSTOMER_ROOT_KEY> is the ID of the root key.
  4. <BASE_ENDPOINT> is the KMS endpoint URL. Do not remove the protocol and port. Find it on the HPCS service Overview page. For the Dallas region, the value for private cluster (SLZ) is https://api.private.us-south.hs-crypto.cloud.ibm.com:12683.
  5. <TOKEN_ENDPOINT> is the token endpoint.
    For the Dallas region, the value for private cluster (SLZ) is https://private.us-south.iam.cloud.ibm.com/identity/token.
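As an optional check, you can confirm that the encoded values decode back to what you expect. For example, assuming the secret was created as shown above:

oc get secret px-ibm -n portworx -o jsonpath='{.data.IBM_BASE_URL}' | base64 --decode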

 

Install Portworx Enterprise from the IBM Cloud catalog

Note: If you are using Db2U storage, the Db2U storage classes have the parameter repl: 3. Make sure that at least 3 nodes have storage.

  1. Select the same region as the cluster and HPCS service.
  2. Select the Enterprise pricing plan. (For a production environment, use the Enterprise with Disaster Recovery plan.)
  3. Set a service name for your identification purposes.
  4. Add the tags: cluster: <cluster-ID> for identification later.
  5. Set the resource group to the resource group of the cluster.
  6. Enter the API Key for your account. The Red Hat OpenShift cluster field is populated with the cluster name.
  7. For the Portworx cluster name, enter any string. It prefixes your Portworx pod names. For example, portworx-slz-01.
  8. For namespace, enter the namespace where you want Portworx to be installed. It is recommended to keep it as kube-system.
  9. If you did not attach any block storage manually, you can provision it in a managed way by selecting Use Cloud Drives.
    • Number of drives: The number of drives to provision on each worker node. Depending on this integer, you might get multiple storage class and size fields.
    • Max Storage Nodes Per Zone: The number of nodes per zone that you want to be equipped with the drives.
    • Storage Class name: The class of storage for these drives. Select ibmc-vpc-block-10iops-tier.
    • Size: Enter the size in GiB for each drive.
  10. For metadata key-value, select the Portworx KVDB.
  11. For Secrets Store, select IBM Key Protect | HPCS.
  12. Leave the Advanced options, CSI enable, and Helm parameters fields at their default values.
  13. Select the Portworx version. For worker machines with RHEL8, select 2.11.4.
  14. The service shows a status of Active within 5 to 15 minutes.
  15. In the specified project, you can see the Portworx pods. Ensure that they are running and ready. Also, check whether the Portworx StorageCluster is online.
    oc get po -n kube-system -lname=portworx
    oc get stc -n kube-system
    oc get storagenodes -n kube-system

To verify that HPCS is authenticated, get into any worker node and find the entry in the journal by using the following commands.

oc debug node/<any-worker-node>
chroot /host
journalctl -leu portworx* | grep -i authentic

Look for a message that indicates that authentication was successful.

If you see any other error about authentication, re-create the secret in the kube-system namespace and restart Portworx. For more information, see Restart the Portworx service.


 

Create the Portworx storage class

Create a Portworx-specific storage class by using the following command.

oc create -f - << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-sharedv4-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "1"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
EOF

 

Create your persistent volume claim

Create a persistent volume claim (PVC) by using the previously created storage class px-sharedv4-sc. The volume that is associated with the persistent volume (PV) is encrypted with a unique secret.

oc create -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: px-sharedv4-sc
EOF
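If you need the name of the PV that was bound to the claim, one quick way to get it (assuming the PVC is named sample-pvc as in the example above) is the following command.

oc get pvc sample-pvc -o jsonpath='{.spec.volumeName}'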

Describe the PV that is bound to the PVC and find the value of VolumeID.

oc describe pv <pv-name> | grep VolumeID

Then, inspect the volume from any worker node.

oc debug node/<any-node-ip>
chroot /host
pxctl volume inspect <volume-id>




Maintenance activities


 

Restart the Portworx service

To restart the Portworx service, add or change an environment variable in StorageCluster.spec.env and save the change, as shown in the following example. All Portworx pods restart.
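For example, adding a dummy variable such as the following is enough to trigger a rolling restart. The variable name used here is arbitrary and only illustrative; any new or changed entry in spec.env has the same effect.

oc edit stc <storagecluster-name> -n kube-system

spec:
  env:
  - name: PX_RESTART_TRIGGER    # arbitrary, illustrative name
    value: "1"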



 

Upgrade the Portworx Enterprise Service

To upgrade the Portworx service, update the version in the StorageCluster.

oc edit stc <storagecluster-name>
.spec.version: <new-version>


Save the changes. The Portworx pods restart one at a time.


 

Uninstall and wipe the Portworx service

After you uninstall and wipe the Portworx service, no Portworx-related objects must appear in the output of the following commands:

oc get po, ds, stc, storagenode, pdb
oc get cm | grep px-bootstrap    
oc get cm | grep px-cloud-drive
oc get secrets | grep helm

The Portworx operator deployment does not need to be deleted.

  1. Update the storage cluster and add the delete strategy, as shown in the sketch after these steps.
    oc get stc
    oc edit stc <storagecluster-name>
  2. Delete the storage cluster. Wait for the wiper pods to finish and for all the Portworx pods to be deleted.
    oc delete stc <storagecluster-name>
    watch oc get po -n kube-system
  3. Delete the Portworx Enterprise service.
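For reference, the wipe variant of the delete strategy looks like the following in the StorageCluster spec. Check the Portworx documentation for the delete strategy types that your version supports.

spec:
  deleteStrategy:
    type: UninstallAndWipe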

 

Key rotation

Key rotation replaces the key material of the root key that protects your data encryption keys.

Open your HPCS service and go to the KMS keys tab. Click the overflow menu (three dots) for the specific key and click Rotate key. The last-updated timestamp changes. Your application remains unaffected.
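Keys can also be rotated from the command line. The following is a sketch that assumes the IBM Cloud Key Protect CLI plugin is installed and configured to use the key management endpoint of your HPCS instance; flag names can differ between plugin versions, so check ibmcloud kp help first.

ibmcloud kp key rotate <root-key-id> --instance-id <hpcs-instance-guid>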



Troubleshooting


 

Increasing cloud drive storage

You cannot change the size of the provisioned cloud drives after they are set. You can increase only the number of nodes per zone. You cannot decrease it.

oc edit storagecluster/<portworx-storage-cluster> -n kube-system

You can increase the max_storage_nodes_per_zone in the annotation to have more worker nodes per zone with storage drives. The size cannot be changed, so do not update the size value in the annotation.



 

Replacing nodes

You can safely replace a node with or without storage attached. These steps can be used when the Red Hat OpenShift version of the node is updated or when a faulty node is replaced.

  1. Choose the node that you want to replace and use the debug command to access it.
    oc debug node/<node-ip>

  2. Enter Portworx maintenance mode by using the following command.
        pxctl service maintenance --enter
  3. Update or replace the node. Wait for a new node to start and go to Ready in IBM Cloud console. The wait might take 5-10 minutes.
    ibmcloud ks worker update -w <node-id> -c <cluster-id>
    ibmcloud ks worker ls -c <cluster-id>
    Portworx and a cloud drive are attached to the new node to maintain the storage nodes per zone value.
  4. Exit maintenance mode on any node that is still in maintenance.
            oc rsh <pod-for-node-still-in-maintenance>
            /opt/pwx/bin/pxctl service maintenance --exit
  5. Check that the status shows all nodes as online and that the newest node provides storage.
    /opt/pwx/bin/pxctl status

 

Expanding the size of a persistent volume claim (PVC)

The following steps are for resizing the PVC.

  1. On the Red Hat OpenShift web console, go to your PVC and select Actions > Expand PVC. Set the value in GiB to the size that you want the PVC to grow to.
  2. In the Portworx CLI, edit the PVC size.
    oc debug node/<any-node-ip>
    chroot /host
    pxctl volume update <volume-id> --size=<size-in-GiB>


Portworx asynchronous disaster recovery example with IBM FTM for Check


 

You can set up Portworx asynchronous disaster recovery with a unidirectional ClusterPair and test it by using IBM FTM for Check 4.0.5.1 with IBM Db2U HADR.

The following list shows the infrastructure that you need.

  • Two Red Hat OpenShift Kubernetes Service (ROKS) clusters in the same or different regions. For more information about a system configuration that you can use to configure these clusters, see System configuration for this deployment.
  • IBM Cloud Object Storage instance in any region.

The following list shows the Portworx and HPCS prerequisites.

  • The same version of Portworx Enterprise DR is installed on the ROKS clusters.
  • The HPCS secret must refer to the same root key.
  • The namespaces on both clusters have the same name.

Note: Db2 does not constantly write all data to disk. A disaster recovery strategy that copies storage from one disk to another misses the data in the buffer pool that is not yet written to disk, which might lead to data corruption. To avoid this problem, prepare Db2 for a snapshot before the copy is taken so that all data is written to disk.
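One way to quiesce Db2 I/O around the point where the storage copy is taken is to suspend write operations. The following is a minimal sketch that assumes you can run commands as the Db2 instance owner inside the Db2U pod; confirm the exact procedure for your FTM and Db2 releases before you use it.

db2 connect to <database-name>
db2 set write suspend for database
# take the storage-level copy or wait for the migration to finish
db2 set write resume for database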

This Portworx asynchronous disaster recovery example has the following general steps.

  1. Scale down FTM for Check on the destination cluster
  2. Set up Portworx asynchronous disaster recovery
  3. Do the failover operation


 

Scale down FTM for Check on the destination cluster

Portworx disaster recovery requires that you completely scale down the application, such as FTM for Check, in the destination namespace. All pods that mount persistent volume claims (PVCs) must be down so that these PVCs and persistent volumes (PVs) can be deleted. Portworx later re-creates them during the migration.

On the destination cluster, do a regular deployment of IBM FTM for Check in a namespace with the same name as the source namespace.

After the deployment completes, you can start to scale down the application.

  1. Set the FTM disaster recovery mode to passive and wait for all the application pods to scale down.
    oc patch ftmcheck <instance-name> -p '{"spec":{"dr":{"mode":"passive"}}}' --type=merge
  2. Scale down the FTP server pod.
    oc patch ftmcheck <instance-name> -p '{"spec":{"ftp_server":{"enable":false}}}' --type=merge
    oc patch ftmcheck <instance-name> -p '{"spec":{"ftp_server":{"replicas":0}}}' --type=merge
  3. Scale down the operator pods so that the FTM and the FTM for Check operators do not re-create the PVCs.
    oc scale deploy ftmbase-operator --replicas 0
    oc scale deploy ftm-check-operator --replicas 0
  4. Scale down the FTM for Check artifacts pod because it uses the backup PVC.
    oc scale deploy ftm-artifact-check --replicas 0
  5. Scale down the IBM MQ pods.
    oc scale sts <instance-name>-ibm-mq --replicas 0
  6. Scale down the ldap-server deployment, Db2U, and etcd statefulset of the Db2uClusters.
    oc get deploy | grep c-db2u
    oc scale deploy <deploy-name> --replicas 0
    oc get sts | grep db2u
    oc scale sts <sts-name> --replicas 0
    Delete all instdb and restore-morph pods that show a status of Completed.
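One way to remove those completed pods in bulk, assuming that no other completed pods in the namespace need to be kept, is the following command.

oc delete pod --field-selector=status.phase==Succeeded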

Wait for all these pods to terminate. After these pods are down, you can delete all the PVCs.

oc delete pvc --all

The scaled down version of the FTM for Check deployment is ready to serve as a standby.


 

Set up Portworx asynchronous disaster recovery

To set up Portworx asynchronous disaster recovery, you need to do the following things.


 

Get storkctl for your local computer

You can get the storkctl binary by logging in to either cluster and running the following commands.

oc project kube-system

export STORK_POD=$(oc get pods -o=jsonpath='{.items[0].metadata.name}' -l name=stork)

oc cp $STORK_POD:/storkctl/linux/storkctl ./storkctl

If you get an unexpected EOF error, use the following command.

oc exec $STORK_POD -- cat /storkctl/linux/storkctl > storkctl

Update the file permissions for Stork to add the executable permission by running the following command.

chmod +x storkctl

After you add the executable file permission for the storkctl file, its file permissions are -rwxrwxr-x.


 

Enable load balancing on the Portworx service

Create a public load balancer on both the source and destination clusters.

Edit the StorageCluster object by using the following command.

oc edit stc <portworx-stc> -n kube-system

Change the following annotation on the StorageCluster to set the service type of the Portworx service to LoadBalancer.

metadata:
  annotations:
    portworx.io/service-type: "portworx-service:LoadBalancer"

After you save the change, the service type of the portworx-service service changes to LoadBalancer and you can use the external IP address.

To see the service type for the services, you can run the oc get svc -n kube-system command. The following example output from the command shows where you can see that the service type for the portworx-service is LoadBalancer.

NAME                 TYPE             CLUSTER-IP         EXTERNAL-IP
portworx-service     LoadBalancer     ###.###.###.###    1234abcd-us-south.lb.appdomain.cloud


 

Create credentials for IBM Cloud Object Store

IBM Cloud Object Storage is an S3-compliant object store. To create the credentials, complete the following steps.

  1. Find the UUID of the destination cluster and save it. To find the UUID, run these commands on the destination cluster.
    PX_POD=$(oc get pods -l name=portworx -n kube-system -o=jsonpath='{.items[0].metadata.name}')
    
    oc exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status | grep UUID | awk '{print $3}'
  2. Create an instance of IBM Cloud Object Storage in the resource group of the source cluster. You do not need to create a bucket. Also, create the service credential with a role of Writer and with HMAC authentication enabled. For more information, see https://cloud.ibm.com/docs/containers?topic=containers-storage-cos-understand#create_cos_secret
  3. Create a cloud store credential in both the source and the destination clusters by using the following command.
      /opt/pwx/bin/pxctl credentials create \
    --provider s3 \
    --s3-access-key <access-key-id> \
    --s3-secret-key <secret-access-key> \
    --s3-region <region> \
    --s3-endpoint <endpoint> \
    --s3-storage-class STANDARD \
    clusterPair_<UUID-of-destination-cluster>
    The values to use for this command are shown in the following list.
    • The provider is s3.
    • For s3-access-key, use the access key ID. This value can be found in the HMAC section of the Cloud Object Storage service credentials page.
    • For s3-secret-key, use the secret access key. This value can be found in the HMAC section of the Cloud Object Storage service credentials page.
    • For s3-region, use the region name where the Cloud Object Storage was created. It can be set to us-south.
    • For s3-endpoint, use the direct endpoint for the specific region or cross-region.
    • For <UUID-of-destination-cluster>, use the UUID of the destination cluster that you saved in a previous step.

 

Create the ClusterPair resource

To create the ClusterPair, do the following steps.

  1. Get the destination cluster token by running the following commands on the destination cluster. You need this cluster token to configure the ClusterPair.
    PX_POD=$(oc get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
    
    oc exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl cluster token show
  2. Generate the ClusterPair spec on the destination cluster by running the following command on the cluster.
    storkctl generate clusterpair -n <namespace-to-migrate> <clusterpair-name> -o yaml > clusterpair.yaml
    For example:
    storkctl generate clusterpair -n cluster-pair-dr cluster-pair-dr -o yaml > clusterpair.yaml
  3. Add the following properties to the ClusterPair spec under the .spec.options section (see the sketch after these steps). The property values must be in double quotation marks.
    • ip: The external IP address of the load balancer on the destination cluster.
    • port: The port for that node in the destination cluster. For example, 9001.
    • token: The destination cluster token that you got from the destination cluster.
    • mode: Set this property to DisasterRecovery.
  4. Apply the ClusterPair to the source cluster by running the following command.
    oc apply -f clusterpair.yaml
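For reference, after the edit the relevant part of clusterpair.yaml might look like the following sketch. All of the values are placeholders.

spec:
  options:
    ip: "<load-balancer-hostname-or-external-ip>"
    mode: "DisasterRecovery"
    port: "9001"
    token: "<destination-cluster-token>"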

Verify that the ClusterPair is ready by running the following command.

./storkctl get clusterpair -n <namespace-to-migrate>

The storage status and the scheduler status are shown as ready when the ClusterPair is ready. The following example output from the command shows where you can see the storage and scheduler status for the ClusterPair that is called cluster-pair-dr.
NAME                STORAGE-STATUS     SCHEDULER-STATUS     CREATED
cluster-pair-dr     Ready              Ready                <Date and time that the ClusterPair was created>

You can also use the following describe command to see the status.

oc describe clusterpair <clusterpair-name> -n <namespace-to-migrate>

The following example output from the describe command shows where you can see the storage and scheduler status for the ClusterPair.
  Options:
    Ip:    1234abcd-us-south.lb.appdomain.cloud
    Mode:  DisasterRecovery
    Port:  9001
    Token: <The name of the token>
  Platform Options:
Status:
  Remote Storage Id:  <The remote storage ID>
  Scheduler Status:   Ready
  Storage Status:     Ready
Events:               <none>


 

Schedule the migration

To schedule a migration, you need to create SchedulePolicy and MigrationSchedule objects.


 

Create a SchedulePolicy

Create a SchedulePolicy object in the source namespace on the source cluster. Save the following definition to a file (for example, policy-hourly.yaml), and set the interval to suit the migration frequency and network latency that you expect.

apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: policy-hourly
  namespace: <namespace-to-migrate>
policy:
  interval:
    intervalMinutes: 60

Apply the policy by running the following command.

oc apply -f policy-hourly.yaml

You can use the following command to verify that the SchedulePolicy object was created.

./storkctl get schedulepolicy -n <namespace-to-migrate>

The following example output from the command shows where you can see the policy-hourly schedule policy that you created.
NAME                   INTERVAL-MINUTES   DAILY     WEEKLY   MONTHLY
default-daily-policy   N/A                12:00AM   N/A      N/A
policy-hourly          60                 N/A       N/A      N/A


 

Create a MigrationSchedule

Create a MigrationSchedule object in the source namespace on the source cluster. Save the following definition to a file (for example, migrate-dr.yaml). Creating this object starts the first migration.

apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: <object-name>
  namespace: <namespace-to-migrate>
spec:
  template:
    spec:
      clusterPair: <clusterpair-name>
      includeResources: false
      includeVolumes: true
      includeApplications: false
      namespaces:
      - <namespace-to-migrate>
  schedulePolicyName: <schedulepolicy-name>
  suspend: false
  autoSuspend: false

Create the schedule object by running the following command.

oc apply -f migrate-dr.yaml

You can use the following command to verify that the MigrationSchedule object was created.

./storkctl get migration -n <namespace-to-migrate>

The following example output from the command shows that the first migration started when you created the MigrationSchedule object.
NAME           CLUSTERPAIR         STAGE       STATUS         VOLUMES   RESOURCES
migrate-dr     cluster-pair-dr     Volumes     InProgress     0/20      N/A

The migration is now scheduled and running. Only the data from the volumes is copied at regular intervals into the destination namespace. Check the migration status by running the describe command for the created resource.

oc describe migration migrate-dr -n <migration-namespace>

The following example output from the command shows that the migration completed successfully.

Status:
  Application Activated:   false
  Items:
    Interval:
      Creation Timestamp:   <Date and time that the migration was created>
      Finish Timestamp:     <Date and time that the migration completed>
      Name:                 migrate-dr-interval
      Status:               Successful
Events:
  Type       Reason         Age    From      Message
  ----       ------         ----   ----      -------
  Normal     Successful     7m     stork     Scheduled migration completed successfully

You can use the following command to show statistics from the migration.

./storkctl get migration -n <namespace-to-migrate>

The following example output from the command shows the 20 Portworx volumes that correspond to the 20 PVCs that are being migrated. Later, the 20 PVs and 20 PVCs are created on the destination cluster by Portworx.
NAME           CLUSTERPAIR         STAGE       STATUS         VOLUMES   RESOURCES
migrate-dr     cluster-pair-dr     Final       Successful     20/20     40/40


 

Do the failover operation

When you need to fail over to the destination cluster, do the following steps.


 

Stop the data migrations

Data migration must be stopped during the failover because data cannot be re-created when the destination namespace is fully scaled up. Delete the MigrationSchedule object in the source cluster to stop the migrations.

oc delete -f migrate-dr.yaml


 

Migrate the Db2 SSL secret data

Copy the data, such as the certificate and TLS data, from the ftm-db2-ssl-cert-secret secret in the source namespace to the ftm-db2-ssl-cert-secret secret in the destination namespace. You can view and edit this data on the secret detail YAML page in the Red Hat OpenShift console.


 

Scale up the application on the destination cluster

After the previous steps are complete, scale up the application on the destination cluster.

  1. Scale up the IBM MQ pods.
    oc scale sts <instance-name>-ibm-mq --replicas 3
  2. Scale up the Db2uClusters.
    oc get deploy | grep c-db2u
    oc scale deploy <deploy-name> --replicas 1
    oc get sts | grep db2u
    oc scale sts <sts-name> --replicas 1
  3. Scale up the FTM for Check artifacts pod.
    oc scale deploy ftm-artifact-check --replicas 1
  4. Scale up the FTM and the FTM for Check operator pods. The operators re-create the PVCs.
    oc scale deploy ftmbase-operator --replicas 1
    oc scale deploy ftm-check-operator --replicas 1
  5. Set the FTM disaster recovery mode to active. All the application pods start.
    oc patch ftmcheck <instance-name> -p '{"spec":{"dr":{"mode":"active"}}}' --type=merge
  6. Scale up the FTP server pod.
    oc patch ftmcheck <instance-name> -p '{"spec":{"ftp_server":{"enable":true}}}' --type=merge

After everything scales back up, the application is running on the destination cluster. You can now view the transferred data in the pods, the database, and the Control Center user interface.



More Information


 

Cloud drives and how they compare with block storage

Cloud drives are block volumes that are provisioned by using the IBM CSI driver in the user's IBM Cloud account. They are attached to each worker node of the cluster based on the specification that is selected when Portworx is installed. These drives are maintained in the user's account and are VPC Gen 2 CSI block volumes, so there is no separate storage repository. For more information about operating cloud drives, see https://docs.portworx.com/operations/operate-kubernetes/cloud-drive-operations/ibm/operate-cloud-drives/

The following comparison covers the two storage options on the Portworx installation screen: using already attached drives and using cloud drives.

Ease of use
  Already attached drives: The user must provision block volumes and attach them to the worker nodes.
  Cloud drives: The user provides a specification, and block volumes are provisioned in the user account and attached to each worker node.

Ease of portability
  Already attached drives: After drives are attached to worker nodes, they cannot be moved automatically to storageless nodes.
  Cloud drives: Drives that are provisioned through Portworx are maintained by Portworx and can be used on a storageless node if the storage node fails.

Ease of access
  Already attached drives: Easy to use from the catalog.
  Cloud drives: Easy to use from the catalog.

Auto-scaling
  Already attached drives: No.
  Cloud drives: Yes.

Security
  Already attached drives: Yes, because the drives are in the customer account.
  Cloud drives: Yes, because the drives are in the customer account.

Cost
  Already attached drives: Cost for block drives is based on IOPS and size at IBM; there is no impact on the Portworx cost.
  Cloud drives: Cost for block drives is based on IOPS and size at IBM; there is no impact on the Portworx cost.


[{"Type":"MASTER","Line of Business":{"code":"LOB10","label":"Data and AI"},"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSPKQ5","label":"IBM Financial Transaction Manager"},"ARM Category":[{"code":"a8m50000000Kz8pAAC","label":"Installation"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"4.0.5"}]

Document Information

Modified date:
23 June 2023

UID

ibm16955955