Installing API Connect on the warm-standby data center

Follow the installation steps for API Connect described in the Cloud Pak for Integration documentation, adding the multiSiteHA configuration to the API Connect YAML.

Before you begin

Verify that all the secrets, certificates, and issuers are ready:
oc get certificate -n <namespace> | grep <apic-instance-name>
NAME                           READY   SECRET                         AGE     EXPIRATION
apic-mgmt-replication-client   True    apic-mgmt-replication-client   57s   2024-08-24T09:45:03Z
apic-ptl-replication-client    True    apic-ptl-replication-client    45s   2024-08-24T09:45:17Z

oc get secrets -n <namespace> | grep <apic-instance-name>
NAME                                 TYPE                                  DATA   AGE
apic-ingress-ca                      kubernetes.io/tls                     3      4m6s
apic-mgmt-replication-client         kubernetes.io/tls                     3      2m40s
apic-ptl-replication-client          kubernetes.io/tls                     3      2m26s

oc get issuer -n <namespace> | grep <apic-instance-name>
NAME                  READY   AGE
apic-ingress-issuer   True    4m8s
Where <apic-instance-name> is the name you intend to use for your API Connect cluster CR, and <namespace> is the namespace you created for API Connect.

Ensure that your network is redirecting the custom hostname to the Platform UI in your warm-standby data center.
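For example, you can confirm where the custom hostname currently resolves by using a DNS lookup from a machine on your network (this is a suggested check; substitute your own Platform UI hostname):
dig +short <platform UI custom hostname>
The result should point to the entry point of your warm-standby data center.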

About this task

In the YAML files that are shown here, replace <apic-instance-name> with the name you intend to use for your API Connect cluster CR, as decided in Planning and initial preparation. Set <active data center ingress domain> and <warm-standby data center ingress domain> to their appropriate values, which you can determine by running this command in each data center:
oc get ingresses.config/cluster -o jsonpath={.spec.domain}
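The command returns the cluster's ingress domain, which has a form similar to this illustrative value:
apps.dc1.example.com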

Procedure

  1. Follow the API Connect installation steps using the Platform UI, as described in the Cloud Pak for Integration documentation (https://www.ibm.com/docs/en/cloud-paks/cp-integration). At the point where you specify the configuration properties of your API Connect instance, switch to the YAML tab so that you can edit the YAML directly.
    Remember: Use the same <apic-instance-name> value when you specify the API Connect instance name in the Cloud Pak for Integration Platform UI.
  2. Add the management multiSiteHA section to the YAML file under the spec: section (a filled-in example follows the note below):
      management:
        encryptionSecret:
          secretName: mgmt-encryption-key
        multiSiteHA:
          mode: passive
          replicationEndpoint:
            annotations:
              cert-manager.io/issuer: <apic-instance-name>-ingress-issuer
            hosts:
            - name: mgmt-replication.<warm-standby data center ingress domain>
              secretName: <apic-instance-name>-mgmt-replication-server
          replicationPeerFQDN: mgmt-replication.<active data center ingress domain>
          tlsClient:
            secretName: <apic-instance-name>-mgmt-replication-client
    Note: Warm-standby is referred to as 'passive' in the CR YAML.
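    For illustration, with an instance name of apic, an active ingress domain of apps.dc1.example.com, and a warm-standby ingress domain of apps.dc2.example.com (all hypothetical values), the section resolves to:
      management:
        encryptionSecret:
          secretName: mgmt-encryption-key
        multiSiteHA:
          mode: passive
          replicationEndpoint:
            annotations:
              cert-manager.io/issuer: apic-ingress-issuer
            hosts:
            - name: mgmt-replication.apps.dc2.example.com
              secretName: apic-mgmt-replication-server
          replicationPeerFQDN: mgmt-replication.apps.dc1.example.com
          tlsClient:
            secretName: apic-mgmt-replication-client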
  3. Add the portal multiSiteHA section to the YAML file under the spec: section:
      portal:
        portalAdminEndpoint:
          annotations:
            cert-manager.io/issuer: <apic-instance-name>-ingress-issuer
          hosts:
          - name: <external load balanced portal admin hostname>
            secretName: portal-admin
        portalUIEndpoint:
          annotations:
            cert-manager.io/issuer: <apic-instance-name>-ingress-issuer
          hosts:
          - name: <external portal load balanced web hostname>
            secretName: portal-web
        encryptionSecret:
          secretName: ptl-encryption-key
        multiSiteHA:
          mode: passive
          replicationEndpoint:
            annotations:
              cert-manager.io/issuer: <apic-instance-name>-ingress-issuer
            hosts:
            - name: ptl-replication.<warm-standby data center ingress domain>
              secretName: <apic-instance-name>-ptl-replication-server
          replicationPeerFQDN: ptl-replication.<active data center ingress domain>
          tlsClient:
            secretName: <apic-instance-name>-ptl-replication-client
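    After the API Connect cluster CR has been created, you can optionally confirm that the portal subsystem picked up the replication settings (a suggested check, not part of the documented procedure; ptl is the short name for the PortalCluster resource):
      oc get ptl -n <namespace> -o yaml | grep -A 8 multiSiteHA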
  4. Disable the configurator service, because it has already run on the active data center.
    1. Add the following section to the API Connect cluster CR (its placement within the spec: section is sketched after this step):
        disabledServices:
          - configurator
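      The entry sits at the top level of the spec: section, alongside the management and portal sections. A minimal sketch of its placement (other keys elided):
        spec:
          management:
            ...
          portal:
            ...
          disabledServices:
          - configurator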
  5. Continue the installation steps as described in Deploying on OpenShift and Cloud Pak for Integration, then return to these steps.
  6. Apply the Cloud Pak for Integration API Connect credentials that you extracted from your active data center in step 5 of Installing API Connect on the active data center.
    oc -n <namespace> apply -f active-cp4i-apic-creds.yaml
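    To confirm that the credentials were applied, you can read back the resources that are named in the file (a suggested check; oc get -f queries the cluster for the objects defined in the file):
    oc -n <namespace> get -f active-cp4i-apic-creds.yaml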

Results

Installation can take around 1 hour to complete.

While the management subsystems on the warm-standby and active data centers are synchronizing, the management status reports Warning, and the haStatus type reports Pending:
oc get mgmt -n <namespace>

NAME         READY   STATUS    VERSION      RECONCILED VERSION   MESSAGE                                                                          AGE
management   n/n     Warning   10.0.5.8-0   10.0.5.8-0           Management is ready. HA Status Warning - see HAStatus in CR for details   8m59s

status:
  haStatus:
  {
    "lastTransitionTime": "2024-08-20T19:47:08Z",
    "message": "Replication not working, install or upgrade in progress.",
    "reason": "na",
    "status": "True",
    "type": "Pending"
  }
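While you wait, you can extract just the haStatus field from the management CR (a convenience sketch; the resource name management matches the output shown above):
oc get mgmt management -n <namespace> -o jsonpath='{.status.haStatus}'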
When the management database replication between sites is complete, the management status reports Running, and status.haStatus reports Ready:
NAME         READY   STATUS    VERSION      RECONCILED VERSION   MESSAGE                                                                          AGE
management   n/n     Running   10.0.5.8-0   10.0.5.8-0           Management is ready.              8m59s

status:
  haStatus:
  {
    "lastTransitionTime": "2024-08-20T19:47:08Z",
    "message": "Replication is working",
    "reason": "na",
    "status": "True",
    "type": "Ready"
  }
After the management subsystems in the active and warm-standby sites are synchronized, the other subsystems are deployed. Verify on both active and warm-standby sites that all subsystems reach Running state:
oc get all -n <namespace>
...
NAME                                                             READY   STATUS    VERSION    RECONCILED VERSION   AGE
analyticscluster.analytics.apiconnect.ibm.com/apis-minimum-a7s   n/n     Running   10.0.5.8   10.0.5.8-1281        27m

NAME                                                READY   STATUS   VERSION    RECONCILED VERSION   AGE
apiconnectcluster.apiconnect.ibm.com/apis-minimum   n/n     Ready    10.0.5.8   10.0.5.8-1281        2d20h

NAME                                                 PHASE     READY   SUMMARY                           VERSION    AGE
datapowerservice.datapower.ibm.com/apis-minimum-gw   Running   True    StatefulSet replicas ready: 1/1   10.5.0.12   25m

NAME                                                 PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
datapowermonitor.datapower.ibm.com/apis-minimum-gw   Running                false          false              25m

NAME                                                        READY   STATUS    VERSION    RECONCILED VERSION   AGE
gatewaycluster.gateway.apiconnect.ibm.com/apis-minimum-gw   n/n     Running   10.0.5.8   10.0.5.8-1281        27m

NAME                                                                READY   STATUS    VERSION    RECONCILED VERSION   AGE
managementcluster.management.apiconnect.ibm.com/apis-minimum-mgmt   8/8     Running   10.0.5.8   10.0.5.8-1281        2d20h

NAME                                                       READY   STATUS    VERSION    RECONCILED VERSION   AGE
portalcluster.portal.apiconnect.ibm.com/apis-minimum-ptl   n/n     Running   10.0.5.8   10.0.5.8-1281        27m
Note: The warm-standby site has fewer managementcluster pods than the active site.
If the management database replication between sites fails for any reason other than an in-progress install or upgrade, the haStatus output shows:
NAME         READY   STATUS    VERSION      RECONCILED VERSION   MESSAGE                                                                          AGE
management   n/n     Warning   10.0.5.8-0   10.0.5.8-0           Management is ready. HA Status Warning - see HAStatus in CR for details   8m59s

status:
  haStatus:
  {
    "lastTransitionTime": "2024-08-20T19:47:08Z",
    "message": "Replication not working",
    "reason": "na",
    "status": "True",
    "type": "Warning"
  }
If the warning persists, see Troubleshooting a two data center deployment.

You can validate that your portal deployments are synchronizing by running oc get pods on both the active and warm-standby data centers. Confirm that the number and names of the pods all match (the UUIDs in the names might be different on each site), and that all are in the Ready state.
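For example, run the following command on each data center and compare the results (the grep pattern assumes the default naming convention, in which portal pod names contain ptl):
oc get pods -n <namespace> | grep ptl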

For additional replication verification checks, see Verifying replication between data centers. It is recommended that you run a test failover and confirm that all of the expected data is present and correct on the newly active site. For details, see How to failover API Connect from the active to the warm-standby data center.

What to do next

Configure your deployment: Cloud Manager configuration checklist.
Important: It is strongly recommended that you complete the disaster recovery preparation on both sites: Disaster recovery on OpenShift and Cloud Pak for Integration. Disaster recovery preparation ensures that you can recover your API Connect installation if both data centers are lost, or if a replication failure occurs. The two sites must use different backup paths for their management and portal backups, as sketched below.
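As an illustrative sketch of the distinct backup paths requirement, the management backup configuration on each site might differ only in its path value (the databaseBackup section shown here is abridged and the values are hypothetical; see the disaster recovery documentation for the full backup configuration):
# Active data center management section:
  databaseBackup:
    path: apic-backups/dc1/mgmt
# Warm-standby data center management section:
  databaseBackup:
    path: apic-backups/dc2/mgmt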