Installing API Connect on the warm-standby data center
Follow the installation steps for API Connect described in the Cloud Pak for Integration documentation, adding the multiSiteHA configuration to the API Connect YAML.
Before you begin
Verify that the ingress CA, the replication client certificates and secrets, and the ingress issuer for your API Connect instance exist in the namespace:
oc get certificate -n <namespace> | grep <apic-instance-name>
NAME                           READY   SECRET                         AGE   EXPIRATION
apic-mgmt-replication-client   True    apic-mgmt-replication-client   57s   2024-08-24T09:45:03Z
apic-ptl-replication-client    True    apic-ptl-replication-client    45s   2024-08-24T09:45:17Z

oc get secrets -n <namespace> | grep <apic-instance-name>
NAME                           TYPE                DATA   AGE
apic-ingress-ca                kubernetes.io/tls   3      4m6s
apic-mgmt-replication-client   kubernetes.io/tls   3      2m40s
apic-ptl-replication-client    kubernetes.io/tls   3      2m26s

oc get issuer -n <namespace> | grep <apic-instance-name>
NAME                  READY   AGE
apic-ingress-issuer   True    4m8s
Where <apic-instance-name> is the name you intend to use for your API Connect cluster CR, and <namespace> is the namespace you created for API Connect.
Ensure that your network is redirecting the custom hostname to the Platform UI in your warm-standby data center.
About this task
Replace <apic-instance-name> with the name you intend to use for your API Connect cluster CR, as decided in Planning and initial preparation. Set <active data center ingress domain> and <warm-standby data center ingress domain> to their appropriate values, which you can determine by running this command in each data center:
oc get ingresses.config/cluster -o jsonpath={.spec.domain}
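For illustration only, a sketch of how these values might appear in the multiSiteHA section of the management subsystem on the warm-standby site is shown below. The replication host prefix and server secret name are assumptions, so take the exact structure and values from the Cloud Pak for Integration documentation:
management:
  multiSiteHA:
    mode: passive                 # warm-standby site runs in passive mode
    replicationEndpoint:
      annotations:
        cert-manager.io/issuer: <apic-instance-name>-ingress-issuer
      hosts:
      - name: mgrreplication.<warm-standby data center ingress domain>    # assumed host prefix
        secretName: mgmt-replication                                      # assumed secret name
    replicationPeerFQDN: mgrreplication.<active data center ingress domain>
    tlsClient:
      secretName: <apic-instance-name>-mgmt-replication-client
The portal subsystem takes an equivalent multiSiteHA block, using the <apic-instance-name>-ptl-replication-client secret shown earlier as its tlsClient.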
Procedure
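The detailed installation steps are in the Cloud Pak for Integration documentation. As a sketch only, if you apply the multiSiteHA configuration by editing the top-level CR directly with oc, you might open it like this:
oc edit apiconnectcluster <apic-instance-name> -n <namespace>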
Results
While the installation is in progress and the management database replication between sites is being established, the management status reports Warning, and the haStatus reports Pending:
oc get mgmt -n <namespace>
NAME         READY   STATUS    VERSION      RECONCILED VERSION   MESSAGE                                                                    AGE
management   n/n     Warning   10.0.5.8-0   10.0.5.8-0           Management is ready. HA Status Warning - see HAStatus in CR for details   8m59s
status:
  haStatus:
  {
    "lastTransitionTime": "2024-08-20T19:47:08Z",
    "message": "Replication not working, install or upgrade in progress.",
    "reason": "na",
    "status": "True",
    "type": "Pending"
  }
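To review the complete haStatus block at any time, you can print the management CR and inspect the status.haStatus section, for example:
oc get mgmt -n <namespace> -o yaml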
When the management database replication between sites is complete, the management status reports Running, and status.haStatus reports Ready:
NAME         READY   STATUS    VERSION      RECONCILED VERSION   MESSAGE                AGE
management   n/n     Running   10.0.5.8-0   10.0.5.8-0           Management is ready.   8m59s
status:
  haStatus:
  {
    "lastTransitionTime": "2024-08-20T19:47:08Z",
    "message": "Replication is working",
    "reason": "na",
    "status": "True",
    "type": "Ready"
  }
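If you want to wait for this transition without rerunning the command manually, you can watch the management CR until the status changes, for example:
oc get mgmt -n <namespace> -w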
Confirm that the API Connect subsystems on the warm-standby data center are in the Running state:
oc get all -n <namespace>
...
NAME                                                              READY   STATUS    VERSION    RECONCILED VERSION   AGE
analyticscluster.analytics.apiconnect.ibm.com/apis-minimum-a7s    n/n     Running   10.0.5.8   10.0.5.8-1281        27m

NAME                                                READY   STATUS   VERSION    RECONCILED VERSION   AGE
apiconnectcluster.apiconnect.ibm.com/apis-minimum   n/n     Ready    10.0.5.8   10.0.5.8-1281        2d20h

NAME                                                 PHASE     READY   SUMMARY                           VERSION     AGE
datapowerservice.datapower.ibm.com/apis-minimum-gw   Running   True    StatefulSet replicas ready: 1/1   10.5.0.12   25m

NAME                                                 PHASE     LAST EVENT   WORK PENDING   WORK IN-PROGRESS   AGE
datapowermonitor.datapower.ibm.com/apis-minimum-gw   Running                false          false              25m

NAME                                                        READY   STATUS    VERSION    RECONCILED VERSION   AGE
gatewaycluster.gateway.apiconnect.ibm.com/apis-minimum-gw   n/n     Running   10.0.5.8   10.0.5.8-1281        27m

NAME                                                                 READY   STATUS    VERSION    RECONCILED VERSION   AGE
managementcluster.management.apiconnect.ibm.com/apis-minimum-mgmt    8/8     Running   10.0.5.8   10.0.5.8-1281        2d20h

NAME                                                       READY   STATUS    VERSION    RECONCILED VERSION   AGE
portalcluster.portal.apiconnect.ibm.com/apis-minimum-ptl   n/n     Running   10.0.5.8   10.0.5.8-1281        27m
Note that the warm-standby data center has fewer managementcluster pods than the active site. If the management status reports Warning, the haStatus output shows:
NAME         READY   STATUS    VERSION      RECONCILED VERSION   MESSAGE                                                                    AGE
management   n/n     Warning   10.0.5.8-0   10.0.5.8-0           Management is ready. HA Status Warning - see HAStatus in CR for details   8m59s
status:
  haStatus:
  {
    "lastTransitionTime": "2024-08-20T19:47:08Z",
    "message": "Replication not working",
    "reason": "na",
    "status": "True",
    "type": "Warning"
  }
If the warning persists, see Troubleshooting a two data center deployment.
You can validate that your portal deployments are synchronizing by running oc get pods on both the active and warm-standby data centers. Confirm that the number and names of the pods all match (the UUIDs in the names might be different on each site), and that all are in the Ready state.
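For example, assuming your portal subsystem name contains ptl (as in the examples in this topic), you can list the portal pods on each site and compare them:
oc get pods -n <namespace> | grep ptl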
For additional replication verification checks, see Verifying replication between data centers. It is recommended that you run a test failover and confirm that all of the expected data is present and correct on the newly active site. See How to failover API Connect from the active to the warm-standby data center for details.