Setting up high availability disaster recovery in a hybrid deployment
Learn how to set up high availability disaster recovery (HADR) in a hybrid deployment.
About this task
- 1. Install IBM® Netcool® Operations Insight® with multiple WebGUI instances
- 2. Provision two HAproxy hosts
- 3. Configure the ObjectServer database for load balancing
- 4. Set up Web GUI load balancing
- 5. Set up signed WebSphere® Application Server certificates
- 6. Set up OAuth persistence
- 7. Configure single sign-on
- 8. Install Netcool Operations Insight deployments on different Red Hat® OpenShift® Container Platform clusters
- 9. Create secrets for access to cloud native components
- 10. Define the primary and backup clusters
- 11. Disable some features
- 12. Deploy the hybrid integration kit on each DASH instance
- 13. Update aggregation layer gateway and OAuth settings
- 14. Update the NetcoolOAuthProvider.xml file
- 15. Set up an HTTP Load Balancer
- 16. Set up certificates for the HAproxy hosts
- 17. Configure the OAuthDBSchema.OAUTH20CLIENTCONFIG ObjectServer table
- 18. Configure the noihybrid YAML files with HADR
- 19. Create a users-certificate configmap for accessing the HTTP Load Balancer
- 20. Create coordinator service secrets on the primary and backup deployments
- 21. Set up the coordinator service
- 22. Restart pods
- 23. Configure the HAproxy on the primary and backup deployments
Signed certificates from the WebGUI instances are loaded into the HTTP Load Balancer, and certificates from the HTTP Load Balancer are in turn loaded into the configmap. The configmap needs the HTTP Load Balancer certificates because the connection from the cluster to the common UI goes through the HTTP Load Balancer.
Procedure
Install IBM Netcool Operations Insight with multiple WebGUI instances
- Prepare for your HADR hybrid deployment. For more information, see the Planning section of the hybrid installation topics.
- Install all on-premises components, including the Netcool/OMNIbus ObjectServer and WebGUI (which can be deployed on a single node).
- Install two separate sets of the WebGUI, DASH, and Netcool/OMNIbus components: one primary set and one backup set across two or more different hosts. For more information, see the Installing on-premises section of the on-premises installation topics.
Provision two HAproxy hosts
- Provision two HAproxy hosts, for example, proxy.east.example.com and proxy.west.example.com. For more information, see HAproxy configuration.
- Deploy Podman on each proxy host. To install Podman, see the Podman Installation Instructions. You will use Podman to run the HAproxy later in step 23. Alternatively, you can run the yum install haproxy command on each proxy host, which installs a version of HAproxy on the host and adds a haproxy.cfg file to the /etc/haproxy directory. The HAproxy configuration file is usually called haproxy.cfg and is usually kept in a directory similar to /usr/local/etc/haproxy.
- If not already created on each proxy host, create a /etc/haproxy directory.
- If not already created, manually create a haproxy.cfg file in the /etc/haproxy directory on each host. You will update the files later in step 23. A minimal preparation sketch follows this list.
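The following commands sketch this preparation on one proxy host. They assume a RHEL-like system where Podman is available from the default repositories, and they use the directory names from this step; adjust the paths and the HAproxy image version for your environment.

dnf install -y podman                # or: yum install -y podman
mkdir -p /etc/haproxy /root/certs    # configuration and certificate directories used later
touch /etc/haproxy/haproxy.cfg       # placeholder file; populated in step 23
podman pull haproxy:2.3              # pre-pull the image that runs the proxy in step 23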
Configure the ObjectServer database for load balancing
- You can configure the internal ObjectServer database for load balancing and high availability. If Db2® is already configured, you can use Db2 for load balancing and high availability instead. For more information, see Configuring load balancing with the ObjectServer. The ObjectServer database is used to configure OAuth persistence, and you can set up the ObjectServer in a multitier configuration. If the ObjectServer database is used, Db2 is not needed.
Set up Web GUI load balancing
- Set up WebGUI for load balancing. For more information, see Configuring a Web GUI load balancing environment. For now, do not set up the HTTP Load Balancer. For more information about setting up the HTTP Load Balancer, see step 15.
Set up signed WebSphere Application Server certificates
- Set up certificate authority (CA) signed certificates for WebGUI load balancing. For more information, see Creating signed WebSphere Application Server certificates.
Set up OAuth persistence
- Set up OAuth persistence for your hybrid clusters by using the ObjectServer tables.
Complete the following steps. These steps are also described in steps 1 to 5 in the Enabling the persistent OAuth 2.0 service in WebGUI in a high availability disaster recovery hybrid deployment topic.
- Create a new $NCHOME/omnibus/etc/web_oauth.sql file and paste the following content:

CREATE database OAuthDBSchema;
go

CREATE TABLE OAuthDBSchema.OAUTH20CACHE persistent
(
    LOOKUPKEY      VARCHAR(256) PRIMARY KEY,
    UNIQUEID       VARCHAR(128) NODEFAULT,
    COMPONENTID    VARCHAR(256) NODEFAULT,
    TYPE           VARCHAR(64) NODEFAULT,
    SUBTYPE        VARCHAR(64),
    CREATEDAT      UNSIGNED64,
    LIFETIME       INT,
    EXPIRES        UNSIGNED64,
    TOKENSTRING    VARCHAR(2048) NODEFAULT,
    CLIENTID       VARCHAR(64) NODEFAULT,
    USERNAME       VARCHAR(64) NODEFAULT,
    SCOPE          VARCHAR(512) NODEFAULT,
    REDIRECTURI    VARCHAR(2048),
    STATEID        VARCHAR(64) NODEFAULT
);
go

CREATE TABLE OAuthDBSchema.OAUTH20CLIENTCONFIG persistent
(
    COMPONENTID    VARCHAR(256) PRIMARY KEY,
    CLIENTID       VARCHAR(256) PRIMARY KEY,
    CLIENTSECRET   VARCHAR(256),
    DISPLAYNAME    VARCHAR(256) NODEFAULT,
    REDIRECTURI    VARCHAR(2048),
    ENABLED        INT
);
go
- Add the tables to the primary (AGG_P) ObjectServer by running the following command. (A verification sketch appears at the end of this step.)

$NCHOME/omnibus/bin/nco_sql -user root -password <password> -server AGG_P -input $NCHOME/omnibus/etc/web_oauth.sql

- Add the tables to the backup (AGG_B) ObjectServer by running the following command:

$NCHOME/omnibus/bin/nco_sql -user root -password <password> -server AGG_B -input $NCHOME/omnibus/etc/web_oauth.sql
- Add the following section to the $NCHOME/omnibus/etc/AGG_GATE.map file:

###############################################################################
#
# WebGUI & WAS OAUTH20 Persistence Service on ObjectServer
#
###############################################################################

CREATE MAPPING Oauth20CacheMap
(
    'LOOKUPKEY'     = '@LOOKUPKEY' ON INSERT ONLY,
    'UNIQUEID'      = '@UNIQUEID',
    'COMPONENTID'   = '@COMPONENTID',
    'TYPE'          = '@TYPE',
    'SUBTYPE'       = '@SUBTYPE',
    'CREATEDAT'     = '@CREATEDAT',
    'LIFETIME'      = '@LIFETIME',
    'EXPIRES'       = '@EXPIRES',
    'TOKENSTRING'   = '@TOKENSTRING',
    'CLIENTID'      = '@CLIENTID',
    'USERNAME'      = '@USERNAME',
    'SCOPE'         = '@SCOPE',
    'REDIRECTURI'   = '@REDIRECTURI',
    'STATEID'       = '@STATEID'
);

CREATE MAPPING Oauth20ClientConfig
(
    'COMPONENTID'   = '@COMPONENTID' ON INSERT ONLY,
    'CLIENTID'      = '@CLIENTID' ON INSERT ONLY,
    'CLIENTSECRET'  = '@CLIENTSECRET',
    'DISPLAYNAME'   = '@DISPLAYNAME',
    'REDIRECTURI'   = '@REDIRECTURI',
    'ENABLED'       = '@ENABLED'
);
- Add the following section to the $NCHOME/omnibus/etc/AGG_GATE.tblrep.def file:

###############################################################################
#
# WebGUI & WAS OAUTH20 Persistence on ObjectServer
#
###############################################################################

REPLICATE ALL FROM TABLE 'OAuthDBSchema.OAUTH20CACHE'
    USING map 'Oauth20CacheMap';

REPLICATE ALL FROM TABLE 'OAuthDBSchema.OAUTH20CLIENTCONFIG'
    USING map 'Oauth20ClientConfig';
- Restart the Netcool/OMNIbus gateway.
- Create a WebSphere Application Server data source for OAuth persistence. Log on to the WebSphere Application Server admin console.
- Create an authentication alias. Click WAS Admin Console > Security > Global Security > Java Authentication and Authorization Service > J2C authentication data. Click New, provide an alias name, provide your ObjectServer credentials, and click Apply.
- Click WAS Admin Console > Resources > JDBC > Data source > New....
- Set the data source name to OAuthProvider.
- Set the JNDI name to jdbc/oauthProvider.
- Click Next.
- Create a new JDBC provider.
- Click Next.
- Select Sybase for the database type.
- Select Sybase JDBC 3 Driver for the provider type.
- Select Connection pool data source for the implementation type.
- Set the name to Sybase JDBC 3 Driver OAuth.
- Click Next.
- Set the class path to ${SYBASE_JDBC_DRIVER_PATH}/jconn3.jar.
- Provide the path to the JAZZSM_HOME/lib/OServer directory, for example /opt/IBM/JazzSM/lib/OServer.
- Click Next.
- Provide the port number of the ObjectServer.
- Provide the ObjectServer host name.
- Set the database name to OAuthDBSchema.
- Click Next.
- For the component-managed authentication alias, select the authentication alias that you created earlier as part of the HA on ObjectServer prerequisite.
- Click Next.
- Click Finish, and then click the Save link.
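At this point, you can optionally confirm that the OAuth tables exist on both ObjectServers and replicate through the gateway. The following check is a sketch: it assumes the standard ObjectServer catalog.tables system table and the credentials that are used earlier in this step.

# Run against AGG_P, then repeat with -server AGG_B; expect OAUTH20CACHE and OAUTH20CLIENTCONFIG.
$NCHOME/omnibus/bin/nco_sql -user root -password <password> -server AGG_P << EOF
select TableName from catalog.tables where DatabaseName = 'OAuthDBSchema';
go
EOF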
Configure single sign-on
- Configure single sign-on (SSO) between both DASH instances in WebSphere Application Server. Exchange Lightweight Third Party Authentication (LTPA) tokens in WebSphere Application Server and set up SSO. For more information, see Supporting procedures for single sign-on in the Netcool/OMNIbus documentation.
- Export tokens from the primary DASH node in WebSphere Application Server. On the primary DASH node, go to Security > Global security > LTPA. Add a path and password and select Export keys.
- Import LTPA keys to the backup DASH node. Secure copy (SCP) the key, which was exported from the primary node, and go to Security > Global security > LTPA. Add the same password that you used for the primary node. Add the path to the copied key. Select Import keys.
- Set up SSO in the WebSphere Application Server console on both DASH instances. Go to Security > Global security > Web and SIP security > Single sign-on (SSO). Add the correct domain name, for example, .test.xyz.com.
- To confirm that SSO is enabled, connect to one of the WebGUI servers with the fully qualified host name and log in normally in your browser. Then, open a new browser tab and connect to the other WebGUI server with its fully qualified host name. You should be automatically authenticated without needing to provide login credentials. A scripted version of this check follows.
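The same check can be scripted. The following sketch assumes the WebSphere form-login endpoint (j_security_check) and placeholder host names, so verify both against your environment before relying on it.

# Log in to the first DASH instance and capture the LTPA SSO cookie.
curl -k -c /tmp/sso-cookies.txt -d 'j_username=smadmin&j_password=<password>' https://dash-east.example.com:16311/ibm/console/j_security_check

# Present the same cookie jar to the second DASH instance; with SSO working,
# the response should not be a redirect to the login form.
curl -k -b /tmp/sso-cookies.txt -o /dev/null -w '%{http_code}\n' https://dash-west.example.com:16311/ibm/console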
Install Netcool Operations Insight deployments on different Red Hat OpenShift Container Platform clusters
- Install cloud native components on Red Hat OpenShift Container Platform. Two or more hybrid clusters can be located across different geographical sites. Follow these instructions: Installing cloud native components on hybrid.
Create secrets for access to cloud native and on-premises components
- Create secrets for each cluster and ensure that the secrets match for each cluster. Ensure that the secrets match the entries in the OAuth table. For more information about secret creation and authentication, see Configuring authentication. Create the following secrets on each cluster. On the primary cluster, run the following command.

oc create secret generic primary-was-oauth-cnea-secrets \
  --from-literal=proxy.east.example.com.id=proxyeast \
  --from-literal=proxy.east.example.com.secret=secreteast \
  --from-literal=proxy.west.example.com.id=proxywest \
  --from-literal=proxy.west.example.com.secret=secretwest \
  --from-literal=client-id=cnea-id \
  --from-literal=client-secret=cnea-secret

On the backup cluster, run the following command.

oc create secret generic backup-was-oauth-cnea-secrets \
  --from-literal=proxy.east.example.com.id=proxyeast \
  --from-literal=proxy.east.example.com.secret=secreteast \
  --from-literal=proxy.west.example.com.id=proxywest \
  --from-literal=proxy.west.example.com.secret=secretwest \
  --from-literal=client-id=cnea-id2 \
  --from-literal=client-secret=cnea-secret2

Note: Replace example.com with the domain name for your environment.
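To confirm that a secret was created with the expected keys, you can list and decode its data. This check is a sketch that assumes the secret names that are used above.

# List the keys that are stored in the primary secret.
oc get secret primary-was-oauth-cnea-secrets -o jsonpath='{.data}'

# Decode one value and confirm that it matches the entry in the OAuth table.
oc get secret primary-was-oauth-cnea-secrets -o jsonpath='{.data.client-id}' | base64 -d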
Define the primary and backup clusters
- After Netcool Operations Insight on OpenShift is deployed, ensure that you defined which cluster runs as the primary cluster and which cluster runs as the backup cluster.

Primary cluster:

oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=primary

Backup cluster:

oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=backup

Ensure that the managedByUser parameter under the labels section on both clusters is set in the common-ui deployment.

oc edit deployment <release_name>-ibm-hdm-common-ui-uiserver

Add the following parameter under the labels section.

metadata:
  labels:
    managedByUser: "true"

Ensure that the helmValuesNOI.healthcron.enabled setting on both deployments is set to false in the noihybrid custom resource (CR) YAML file.

helmValuesNOI.healthcron.enabled: "false"
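You can confirm the role assignment on each cluster by listing the environment variables that are set on the deployment, as in the following sketch.

# Verify the deployment type on each cluster.
oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver --list | grep DR__DEPLOYMENT__TYPE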
Disable some features
- Where Cassandra replication is not available, disable some features on the backup cluster to ensure that the status on the primary is reflected on the backup. On the backup system, rsh into the trainer pod by running the following command.

oc rsh deployment/<release_name>-ibm-hdm-analytics-dev-trainer

In the command prompt for the trainer pod, run the following curl commands.

curl --data '{"retrainingIntervalMinutes":1440,"enabled":false}' -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'X-TenantID: cfd95b7e-3bc7-4006-a4a8-a73a79c71255' http://localhost:8080/1.0/training/analytics/related-events/schedule

curl --data '{"retrainingIntervalMinutes":1440,"enabled":false}' -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'X-TenantID: cfd95b7e-3bc7-4006-a4a8-a73a79c71255' http://localhost:8080/1.0/training/analytics/seasonal-events/schedule
Deploy the hybrid integration kit on each DASH instance
- Deploy the hybrid integration kit on each DASH instance. For more information, see Installing the integration kit. During the hybrid kit deployment, ensure that the Installation Manager repository and redirect URL both point to the primary hybrid cluster. In each case, use the primary client-id and client secret.
Update aggregation layer gateway and OAuth settings
- After the hybrid integration kit is deployed on each DASH instance, complete the following tasks on each cluster.
- Update the aggregation layer gateway. For more information, see Update gateway settings.
- Update the OAuth settings on the DASH instance. Complete the steps from step 6, Update the OAuth settings, onwards in the Enabling the persistent OAuth 2.0 service in WebGUI in a high availability disaster recovery hybrid deployment topic.
Update the NetcoolOAuthProvider.xml file
- Update the NetcoolOAuthProvider.xml file on each DASH host to provide extra values for the client-ids.
<parameter name="oauth20.autoauthorize.clients" type="ws" customizable="true">
  <value>cnea-id</value>
  <value>cnea-id2</value>
  <value>proxyeast</value>
  <value>proxywest</value>
</parameter>
Note: When the hybrid integration kit is upgraded on the DASH host, repeat steps 13 and 14. The NetcoolOAuthProvider.xml file is overwritten during the upgrade process, so you must repeat the steps to reconfigure the file. After the NetcoolOAuthProvider.xml file is updated, restart the DASH server.
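A quick check that all four client-ids are present can save a failed login later. The file path is an assumption; substitute the location of NetcoolOAuthProvider.xml on your DASH host.

# Confirm the autoauthorize client-ids; re-run this check after any integration kit upgrade.
grep -A 5 'oauth20.autoauthorize.clients' NetcoolOAuthProvider.xml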
Set up an HTTP Load Balancer
- Deploy an HTTP Load Balancer and configure both DASH instances to be accessed through it. Note: If you already deployed an HTTP Load Balancer, you must regenerate the plugin-cfg.xml file. For more information, see Generating the plugin-cfg.xml file in the Netcool/OMNIbus documentation.
- Download and install the HTTP Load Balancer. For more information, see Downloading the HTTP server in the IBM Tivoli® Netcool/OMNIbus documentation.
- Prepare the HTTP Load Balancer for load balancing. For more information, see Preparing the HTTP server for load balancing in the IBM Tivoli Netcool/OMNIbus documentation.
- Set the clone IDs on each DASH host. For more information, see Setting clone IDs for nodes in the Netcool/OMNIbus documentation.
- Generate the plugin-cfg.xml file. For more information, see Generating the plugin-cfg.xml file in the Netcool/OMNIbus documentation. Note: This step must be completed after the hybrid integration kit is deployed on each DASH host. Deployment of the hybrid integration kit adds extra lines to the plugin-cfg.xml file. Add the following jsessionid Uri element to the plugin-cfg.xml file.

<Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/oauth2/*"/>
- Configure SSL between the IBM HTTP Load Balancer plug-in and each node in the cluster. For more information, see Configuring SSL from each node to the IBM HTTP Server in the Netcool/OMNIbus documentation.
- Run the following commands to extract the relevant certificates from each DASH instance and import them into the HTTP Load Balancer keystore.

Retrieve DASH primary instance certificates.

openssl s_client -showcerts -verify 5 -connect <dash_primary_instance_dns>:16311 < /dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="DASH1_cert"a".pem"; print >out}'

Retrieve DASH backup instance certificates.

openssl s_client -showcerts -verify 5 -connect <dash_backup_instance_dns>:16311 < /dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="DASH2_cert"a".pem"; print >out}'

Import root CA certificates from the DASH instances into the HTTP keystore.

/opt/IBM/HTTPServer/bin/gskcmd -cert -add -db /opt/IBM/HTTPServer/conf/plugin-key.kdb -file DASH1_cert1.pem -label DASH1_CA -pw WebAS
/opt/IBM/HTTPServer/bin/gskcmd -cert -add -db /opt/IBM/HTTPServer/conf/plugin-key.kdb -file DASH2_cert2.pem -label DASH2_CA -pw WebAS

On the WebSphere Application Server, go to the truststore in the console. To upload the HTTP Load Balancer certificates in each WebSphere Application Server, go to the signer certificates section and retrieve from port.
- After the certificates and load balancing setup for the HTTP Load Balancer is complete, you can start the HTTP Load Balancer.

<HTTP_Server_home>/bin/apachectl start

Access the DASH instances by using the HTTP Load Balancer referenced URL, without the need for a port number, for example, https://<HTTP_Server_Host>/ibm/console.
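A quick smoke test of the HTTP Load Balancer can be run from any workstation, as in the following sketch. The -k flag skips certificate verification; drop it after the signer certificates are fully configured.

# Expect an HTTP 200, or a redirect to the DASH login page, from the load balancer.
curl -k -s -o /dev/null -w '%{http_code}\n' https://<HTTP_Server_Host>/ibm/console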
Set up certificates for the HAproxy hosts
- Set up the certificates for both HAproxy hosts by completing the following steps.
- Obtain the CA root and CA intermediate certificates for your organization. If these are not available, create self-generated root CA and intermediate certificates.
- Create an openssl.cfg file on the primary proxy host (proxy.east.example.com) and add the following contents:

[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext

[req_distinguished_name]
countryName = Country name from profile
countryName_default = US
stateOrProvinceName = State or province from profile
localityName = Locality name from profile
organizationName = Organization name
organizationName_default = IBM
organizationalUnitName = Organization unit from profile
commonName = Common name from profile

[req_ext]
subjectAltName = @alt_names

[alt_names]
DNS.1 = proxy.west.example.com
DNS.2 = ...
DNS.3 = ...
- Run the following command. You are prompted for the values that are defined in the profile that you created; the Organization name defaults to IBM.
openssl req -out proxy.csr -newkey rsa:2048 -nodes -keyout proxy.key -config openssl.cfg -reqexts req_ext
- Run the following command to inspect the certificate signing request. Keep the proxy.csr and proxy.key files in a safe location.

openssl req -in proxy.csr -noout -text
- Sign the generated CSR file with your root CA authority certificate. This step generates a server certificate.
- (Optional) Run the following commands. The first command converts the downloaded cert.crt DER file to a proxy.crt PEM file. The second command describes the certificate. Check that the validity period is correct. Also, check that the X509v3 Subject Alternative Name contains the FQDNs that you require.

openssl x509 -in cert.crt -inform der -out proxy.crt
openssl x509 -in proxy.crt -noout -text
- Using the downloaded root CA and intermediate certificates, run the following commands to convert and check the certificates.

openssl x509 -in caintermediatecert.der -inform der -out caintermediate.crt
openssl x509 -in carootcert.der -inform der -out caroot.crt
openssl x509 -in caintermediate.crt -noout -text
openssl x509 -in caroot.crt -noout -text
- Create a PEM file by concatenating the intermediate certificate to the root certificate. Keep this certificate file in a safe location.
- Run the following command to add the intermediate certificate to the end of the proxy certificate file.

cat caintermediate.crt >> proxy.crt
- After all the files are generated, add them to each proxy host in a new directory, for example, the /root/certs directory. These files are used to run the HAproxy load balancer.
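Before you copy the files to the proxy hosts, you can verify the chain. The following sketch assumes ca-chain.pem as the name of the concatenated root-plus-intermediate PEM file from the earlier step; the source does not name that file, so substitute your own name.

# Build the chain file if you have not already done so, then verify the server certificate.
cat caroot.crt caintermediate.crt > ca-chain.pem
openssl verify -CAfile ca-chain.pem proxy.crt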
Configure the OAuthDBSchema.OAUTH20CLIENTCONFIG ObjectServer table
- Configure the OAuthDBSchema.OAUTH20CLIENTCONFIG ObjectServer table to include each of the client-ids that are listed in the NetcoolOAuthProvider.xml file. You can complete this step by using the Netcool Administrator tool, or with nco_sql as shown after the table.

| COMPONENTID | CLIENTID | CLIENTSECRET | DISPLAYNAME | REDIRECTURI | ENABLED |
| --- | --- | --- | --- | --- | --- |
| NetcoolOAuthProvider | cnea-id | cnea-secret | Hdm-client | https://<primary_cluster_route_name>/users/api/authprovider/v1/was/return | 1 |
| NetcoolOAuthProvider | cnea-id2 | cnea-secret2 | Hdm-client2 | https://<secondary_cluster_route_name>/users/api/authprovider/v1/was/return | 1 |
| NetcoolOAuthProvider | proxyeast | secreteast | Proxyeast-client | https://<proxy1_hostname.dns_name>/users/api/authprovider/v1/was/return | 1 |
| NetcoolOAuthProvider | proxywest | secretwest | Proxywest-client | https://<proxy2_hostname.dns_name>/users/api/authprovider/v1/was/return | 1 |
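If you prefer the command line to the Netcool Administrator tool, the rows can be inserted with nco_sql. The following sketch covers the first row only; the other three rows follow the same pattern. Run it against the primary ObjectServer and let the gateway replicate the change.

$NCHOME/omnibus/bin/nco_sql -user root -password <password> -server AGG_P << EOF
insert into OAuthDBSchema.OAUTH20CLIENTCONFIG
(COMPONENTID, CLIENTID, CLIENTSECRET, DISPLAYNAME, REDIRECTURI, ENABLED)
values ('NetcoolOAuthProvider', 'cnea-id', 'cnea-secret', 'Hdm-client',
'https://<primary_cluster_route_name>/users/api/authprovider/v1/was/return', 1);
go
EOF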
Configure the noihybrid YAML files with HADR
- On the primary and backup clusters, edit the noihybrid deployment YAML file. On the primary cluster, add the following lines:

dash:
  crossRegionUrls:
    - dash: https://dash-east.example.com:16311
      proxy: https://proxy.east.example.com
    - dash: https://dash-west.xyz.com:16311
      proxy: https://proxy.west.example.com
  trustedCAConfigMapName: users-certificates
  url: https://dash-east.example.com
  username: smadmin
serviceContinuity:
  continuousAnalyticsCorrelation: true

Where:

- The dash.crossRegionUrls parameter points to the DASH or WebGUI instances. The proxy values point to the HAproxy that is associated with each DASH or WebGUI instance.
- The dash.url parameter points to the HTTP Load Balancer.

On the backup cluster, add the following lines:

dash:
  crossRegionUrls:
    - dash: https://dash-east.example.com:16311
      proxy: https://proxy.east.example.com
    - dash: https://dash-west.xyz.com:16311
      proxy: https://proxy.west.example.com
  trustedCAConfigMapName: users-certificates
  url: https://dash-east.example.com
  username: smadmin
serviceContinuity:
  continuousAnalyticsCorrelation: true
helmValuesNOI:
  ibm-ea-dr-coordinator-service.coordinatorSettings.backupDeploymentSettings.proxyURLs: https://proxy.east.example.com,https://proxy.west.example.com
  ibm-ea-dr-coordinator-service.coordinatorSettings.storageClassName: rook-cephfs

Optional: On the backup cluster, you can use the trusted certificates for the SSL connection to the primary coordinator by adding the following parameters.

helmValuesNOI:
  ibm-ea-dr-coordinator-service.coordinatorSettings.backupDeploymentSettings.proxyCertificateConfigMap: users-certificates
  ibm-ea-dr-coordinator-service.coordinatorSettings.backupDeploymentSettings.proxySSLCheck: true

If you want to ignore the SSL certificate check, set the proxySSLCheck parameter to false. Later steps describe the creation of the users-certificates configmap on each cluster, which is used by both the coordinator service and DASH.
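To confirm that the edit took effect on each cluster, you can read the custom resource back, as in the following sketch; the CR kind and release-name placeholder follow the usage in this topic.

# Read the cross-region settings back from the noihybrid CR on each cluster.
oc get noihybrid <release_name> -o yaml | grep -A 10 'crossRegionUrls'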
Create a users-certificate configmap for accessing the HTTP Load Balancer
- Create a users-certificate configmap, which allows a connection from the cluster to DASH. The root CA and intermediate CA certificates must be added to the users-certificate configmap. Complete this step for each cluster in your HADR hybrid deployment. For more information about creating the configmap, see Creation of configmap for access to on-premises WebSphere Application Server and Configmap for root certificates of the proxies.
Create coordinator service secrets on the primary and backup deployments
- Follow the steps in Coordinator service secret and add the following lines to the noihybrid YAML file:

dash:
  crossRegionUrls:
    - proxy: https://netcool.east.example.com
      dash: https://webgui.east.example.com
    - proxy: https://netcool.west.example.com
      dash: https://webgui.west.example.com
Set up the coordinator service
- Set up the coordinator service. For more information, see Setting up the coordinator service. Set the serviceContinuity.continuousAnalyticsCorrelation parameter to true. This step assumes that you have two deployments, one primary and one backup. Nominate one of the deployments as the primary deployment by setting the serviceContinuity.isBackupDeployment parameter to false. Nominate the other deployment as the backup deployment by setting the serviceContinuity.isBackupDeployment parameter to true. Note: You can apply the changes to an existing environment, or you can configure the settings before the installation of IBM Netcool Operations Insight.
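The source does not show the exact YAML placement of these parameters, but assuming that they sit under spec in the noihybrid CR, like the other settings in this topic, a merge patch is one way to set the roles.

# On the primary deployment (the field path under spec is an assumption):
oc patch noihybrid <release_name> --type=merge -p '{"spec":{"serviceContinuity":{"continuousAnalyticsCorrelation":true,"isBackupDeployment":false}}}'

# On the backup deployment:
oc patch noihybrid <release_name> --type=merge -p '{"spec":{"serviceContinuity":{"continuousAnalyticsCorrelation":true,"isBackupDeployment":true}}}'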
Restart pods
- Restart the common-ui and cem-users pods on each Netcool Operations Insight on Red Hat OpenShift cluster.
- Find the names of the common-ui and cem-users pods with the following commands:

oc get pod | grep common-ui
oc get pod | grep cem-users

- Restart the pods with the following command, where pod_name is the name of the pod to be restarted:

oc delete pod pod_name
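The two substeps can also be combined into a single pass, as in the following sketch; deleting the pods is safe because their deployments re-create them.

# Delete all common-ui and cem-users pods in one pass; the ReplicaSets restart them.
for p in $(oc get pods -o name | grep -E 'common-ui|cem-users'); do oc delete "$p"; done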
Configure the HAproxy on the primary and backup deployments
- To configure the HAproxy, update the haproxy.cfg file on each proxy host to perform the dashboard switching task for failover or failback. For more information, see HAproxy configuration.
- Start the HAproxy instance. Run the HAproxy to start the load balancing between the primary and backup clusters. The following example uses podman to run the HAproxy. You can also run the HAproxy outside of podman.

podman run --name <unique_container_name> -p 443:443/tcp -v /root/new_certs:/root/new_certs -v /etc/haproxy:/usr/local/etc/haproxy:ro haproxy:2.3

Where /root/new_certs is the certificates directory, which contains the PEM file for the HAproxy and the CA signer (including the root and intermediate certificates). The podman command also pulls the HAproxy version to use, which in this case is version 2.3. You must have a haproxy.cfg file in the /etc/haproxy directory or the /usr/local/etc/haproxy directory on your host machine.
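After the container starts on each proxy host, you can check that it serves TLS with the expected certificate, as in the following sketch; repeat with the host name of the west proxy.

# Confirm the subject and issuer of the certificate that the proxy presents.
openssl s_client -connect proxy.east.example.com:443 -servername proxy.east.example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer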