Setting up high availability disaster recovery in a hybrid deployment

Learn how to set up high availability disaster recovery (HADR) in a hybrid deployment.

About this task

Signed certificates from the WebGUI instances are loaded into the HTTP Load Balancer. Certificates from the HTTP Load Balancer are then loaded into the configmap. The configmap needs these certificates because the connection from the cluster to the common UI goes through the HTTP Load Balancer.

Procedure

Install IBM Netcool Operations Insight with multiple WebGUI instances

  1. Prepare for your HADR hybrid deployment. For more information, see the Planning section of the hybrid installation topics.

    Install all on-premises components, including the Netcool/OMNIbus ObjectServer and WebGUI (which can be deployed on a single node).

    Install two separate sets of the WebGUI, DASH, and Netcool/OMNIbus components. Install one primary set and one backup set across two or more different hosts. For more information, see the Installing on-premises section of the on-premises installation topics.

Provision two HAproxy hosts

  1. Provision two HAproxy hosts, for example, proxy.east.example.com and proxy.west.example.com. For more information, see HAproxy configuration.
    1. Deploy Podman on each proxy host.

      To install Podman, see the Podman Installation Instructions.

      You will use Podman to run HAproxy later, when you configure HAproxy on the primary and backup deployments.

      Alternatively, you can run the yum install haproxy command on each proxy host. Running the command installs a version of HAproxy and adds a haproxy.cfg file to the /etc/haproxy directory. When HAproxy runs in a container, the configuration file is usually /usr/local/etc/haproxy/haproxy.cfg.

    2. If it does not already exist, create a /etc/haproxy directory on each proxy host.
    3. If it does not already exist, manually create a haproxy.cfg file in the /etc/haproxy directory on each host. You will update these files later, when you configure HAproxy on the primary and backup deployments.

Configure the ObjectServer database for load balancing

  1. Configure the internal ObjectServer database for load balancing and high availability. For more information, see Configuring load balancing with the ObjectServer. The ObjectServer database is used to configure OAuth persistence and can be set up in a multitier configuration. If Db2® is already configured in your environment, you can use Db2 for load balancing and high availability instead; if you use the ObjectServer database, Db2 is not needed.

Set up Web GUI load balancing

  1. Set up WebGUI for load balancing. For more information, see Configuring a Web GUI load balancing environment. Do not set up the HTTP Load Balancer yet; you set it up later in this procedure.

Set up signed WebSphere Application Server certificates

  1. Set up certificate authority (CA) signed certificates for WebGUI load balancing. For more information, see Creating signed WebSphere Application Server certificates.

Set up OAuth persistence

  1. Set up OAuth persistence for your hybrid clusters by using the ObjectServer tables. Complete the following steps. These steps are also described in steps 1 to 5 in the Enabling the persistent OAuth 2.0 service in WebGUI in a high availability disaster recovery hybrid deployment topic.
    1. Create a new $NCHOME/omnibus/etc/web_oauth.sql file and paste the following content:
      CREATE database OAuthDBSchema;
      go
      
      CREATE TABLE OAuthDBSchema.OAUTH20CACHE persistent
      (
      LOOKUPKEY VARCHAR(256) PRIMARY KEY, 
      UNIQUEID VARCHAR(128) NODEFAULT, 
      COMPONENTID VARCHAR(256) NODEFAULT, 
      TYPE VARCHAR(64) NODEFAULT, 
      SUBTYPE VARCHAR(64), 
      CREATEDAT UNSIGNED64, 
      LIFETIME INT, 
      EXPIRES UNSIGNED64, 
      TOKENSTRING VARCHAR(2048) NODEFAULT, 
      CLIENTID VARCHAR(64) NODEFAULT, 
      USERNAME VARCHAR(64) NODEFAULT, 
      SCOPE VARCHAR(512) NODEFAULT, 
      REDIRECTURI VARCHAR(2048), 
      STATEID VARCHAR(64) NODEFAULT
      );
      go
      
      CREATE TABLE OAuthDBSchema.OAUTH20CLIENTCONFIG persistent
      (
      COMPONENTID VARCHAR(256) PRIMARY KEY, 
      CLIENTID VARCHAR(256) PRIMARY KEY, 
      CLIENTSECRET VARCHAR(256), 
      DISPLAYNAME VARCHAR(256) NODEFAULT, 
      REDIRECTURI VARCHAR(2048), 
      ENABLED INT
      );
      go
    2. Add the table to the primary (AGG_P) ObjectServer, by running the following command:
      $NCHOME/omnibus/bin/nco_sql -user root -password <password> -server AGG_P -input $NCHOME/omnibus/etc/web_oauth.sql
    3. Add the table to the backup (AGG_B) ObjectServer, by running the following command:
      $NCHOME/omnibus/bin/nco_sql -user root -password <password> -server AGG_B -input $NCHOME/omnibus/etc/web_oauth.sql
    4. Add the following section to the $NCHOME/omnibus/etc/AGG_GATE.map file:
      ###############################################################################
      #
      # WebGUI & WAS OAUTH20 Persistence Service on ObjectServer
      #
      ###############################################################################
      CREATE MAPPING Oauth20CacheMap
      (
      'LOOKUPKEY' = '@LOOKUPKEY' ON INSERT ONLY,
      'UNIQUEID' = '@UNIQUEID',
      'COMPONENTID' = '@COMPONENTID',
      'TYPE' = '@TYPE',
      'SUBTYPE' = '@SUBTYPE',
      'CREATEDAT' = '@CREATEDAT',
      'LIFETIME' = '@LIFETIME',
      'EXPIRES' = '@EXPIRES',
      'TOKENSTRING' = '@TOKENSTRING',
      'CLIENTID' = '@CLIENTID',
      'USERNAME' = '@USERNAME',
      'SCOPE' = '@SCOPE',
      'REDIRECTURI' = '@REDIRECTURI',
      'STATEID' = '@STATEID'
      );CREATE MAPPING Oauth20ClientConfig
      (
      'COMPONENTID' = '@COMPONENTID' ON INSERT ONLY,
      'CLIENTID' = '@CLIENTID' ON INSERT ONLY,
      'CLIENTSECRET' = '@CLIENTSECRET',
      'DISPLAYNAME' = '@DISPLAYNAME',
      'REDIRECTURI' = '@REDIRECTURI',
      'ENABLED' = '@ENABLED'
      );
    5. Add the following section to the $NCHOME/omnibus/etc/AGG_GATE.tblrep.def file:
      ###############################################################################
      #
      # WebGUI & WAS OAUTH20 Persistence on ObjectServer
      #
      ###############################################################################
      REPLICATE ALL FROM TABLE 'OAuthDBSchema.OAUTH20CACHE'
      USING map 'Oauth20CacheMap';
      
      REPLICATE ALL FROM TABLE 'OAuthDBSchema.OAUTH20CLIENTCONFIG'
      USING map 'Oauth20ClientConfig';
    6. Restart the Netcool/OMNIbus gateway.
    7. Create a WebSphere Application Server data source for OAuth persistence. Log on to the WebSphere Application Server administrative console.
    8. Create an authentication alias. Click WAS Admin Console > Security > Global Security > Java Authentication and Authorization Service > J2C authentication data. Click New, provide an alias name, provide your ObjectServer credentials, and click Apply.
    9. Click WAS Admin Console > Resources > JDBC > Data source > New....
    10. Set the data source name to OAuthProvider.
    11. Set the JNDI name to jdbc/oauthProvider.
    12. Click Next.
    13. Create a new JDBC provider.
    14. Click Next.
    15. Select Sybase for the database type.
    16. Select Sybase JDBC 3 Driver for the provider type.
    17. Select Connection pool data source for the implementation type.
    18. Set the name to Sybase JDBC 3 Driver OAuth.
    19. Click Next.
    20. Set the class path to ${SYBASE_JDBC_DRIVER_PATH}/jconn3.jar.
    21. For the ${SYBASE_JDBC_DRIVER_PATH} variable, provide the path to the JAZZSM_HOME/lib/OServer directory, for example, /opt/IBM/JazzSM/lib/OServer.
    22. Click Next.
    23. Provide the port number of the ObjectServer.
    24. Provide the ObjectServer server host name.
    25. Set the database name to OAuthDBSchema.
    26. Click Next.
    27. For the component-managed authentication alias, select the authentication alias that you created earlier as part of the HA on ObjectServer prerequisite.
    28. Click Next.
    29. Click Finish, and then click the Save link.

Configure single sign-on

  1. Configure single sign-on (SSO) between both DASH instances in WebSphere Application Server. Exchange Lightweight Third Party Authentication (LTPA) tokens in WebSphere Application Server and set up SSO. For more information, see Supporting procedures for single sign-on in the Netcool/OMNIbus documentation.
    1. Export tokens from the primary DASH node in WebSphere Application Server. On the primary DASH node, go to Security > Global security > LTPA. Add a path and password and select Export keys.
    2. Import LTPA keys to the backup DASH node. Secure copy (SCP) the key, which was exported from the primary node, and go to Security > Global security > LTPA. Add the same password that you used for the primary node. Add the path to the copied key. Select Import keys.
    3. Set up SSO in the WebSphere Application Server console on both DASH instances. Go to Security > Global security > Web and SIP security > Single sign-on (SSO). Add the correct domain name, for example, .test.xyz.com.
    4. To confirm that SSO is enabled, connect to one of the WebGUI servers by using its fully qualified host name and log in normally in your browser. Then, open a new browser tab and connect to the other WebGUI server by using its fully qualified host name. You are automatically authenticated without needing to provide login credentials.

Install Netcool Operations Insight deployments on different Red Hat OpenShift Container Platform clusters

  1. Install cloud native components on Red Hat OpenShift Container Platform. Two or more hybrid clusters can be located across different geographical sites.

Create secrets for access to cloud native and on-premises components

  1. Create secrets for each cluster and ensure that the secrets match for each cluster. Ensure that the secrets match the entries in the OAuth table. For more information about secret creation and authentication, see Configuring authentication. Create the following secrets on each cluster.
    On the primary cluster, run the following command.
    oc create secret generic primary-was-oauth-cnea-secrets \
        --from-literal=proxy.east.example.com.id=proxyeast \
        --from-literal=proxy.east.example.com.secret=secreteast \
        --from-literal=proxy.west.example.com.id=proxywest \
        --from-literal=proxy.west.example.com.secret=secretwest \
        --from-literal=client-id=cnea-id \
        --from-literal=client-secret=cnea-secret

    On the backup cluster, run the following command.
    oc create secret generic backup-was-oauth-cnea-secrets \
        --from-literal=proxy.east.example.com.id=proxyeast \
        --from-literal=proxy.east.example.com.secret=secreteast \
        --from-literal=proxy.west.example.com.id=proxywest \
        --from-literal=proxy.west.example.com.secret=secretwest \
        --from-literal=client-id=cnea-id2 \
        --from-literal=client-secret=cnea-secret2
    Note: Replace example.com with the domain name for your environment.

Define the primary and backup clusters

  1. After Netcool Operations Insight on OpenShift is deployed, define which cluster runs as the primary cluster and which cluster runs as the backup cluster.
    Primary cluster:
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=primary
    Backup cluster:
    oc set env deploy/<ReleaseName>-ibm-hdm-common-ui-uiserver DR__DEPLOYMENT__TYPE=backup
    Ensure that the managedByUser parameter under the labels section on both clusters is set in the common-ui deployment.
    oc edit deployment <release_name>-ibm-hdm-common-ui-uiserver
    Add the following parameter under the labels section.
      labels:
        metadata.labels.managedByUser: "true"
    Ensure that the helmValuesNOI.healthcron.enabled setting on both deployments is set to false in the noihybrid custom resource (CR) YAML file.
    helmValuesNOI.healthcron.enabled: "false"
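    As a sketch, the healthcron setting sits in the noihybrid CR alongside the other helmValuesNOI entries. This fragment assumes the spec layout that is shown for the helmValuesNOI coordinator settings later in this procedure; verify the path in your own CR before applying it.

    ```yaml
    # Hedged sketch of the noihybrid CR fragment; the surrounding spec layout
    # is assumed from the helmValuesNOI examples elsewhere in this procedure.
    spec:
      helmValuesNOI:
        healthcron.enabled: "false"
    ```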

Disable some features

  1. Where Cassandra replication is not available, disable some features on the backup cluster to ensure that the status on the primary is reflected on the backup.
    On the backup system, rsh into the trainer pod by running the following command.
    oc rsh deployment/<release_name>-ibm-hdm-analytics-dev-trainer
    In the command prompt for the trainer pod, run the following curl commands.
    curl --data '{"retrainingIntervalMinutes":1440,"enabled":false}' -X POST \
         --header 'Content-Type: application/json' --header 'Accept: application/json' \
         --header 'X-TenantID: cfd95b7e-3bc7-4006-a4a8-a73a79c71255' \
         http://localhost:8080/1.0/training/analytics/related-events/schedule
    curl --data '{"retrainingIntervalMinutes":1440,"enabled":false}' -X POST \
         --header 'Content-Type: application/json' --header 'Accept: application/json' \
         --header 'X-TenantID: cfd95b7e-3bc7-4006-a4a8-a73a79c71255' \
         http://localhost:8080/1.0/training/analytics/seasonal-events/schedule

Deploy the hybrid integration kit on each DASH instance

  1. Deploy the hybrid integration kit on each DASH instance. For more information, see Installing the integration kit.
    During the hybrid kit deployment, ensure that the Installation Manager repository and redirect URL both point to the primary hybrid cluster. In each case, use the primary client-id and client secret.

Update aggregation layer gateway and OAuth settings

  1. After the hybrid integration kit is deployed on each DASH instance, complete the following tasks on each cluster.
    1. Update the aggregation layer gateway. For more information, see Update gateway settings.
    2. Update the OAuth settings on the DASH instance. Complete the steps from step 6, Update the OAuth settings, onwards in the Enabling the persistent OAuth 2.0 service in WebGUI in a high availability disaster recovery hybrid deployment topic.

Update the NetcoolOAuthProvider.xml file

  1. Update the NetcoolOAuthProvider.xml file on each DASH host. Provide extra values for the client-ids on each DASH host.
    <parameter name="oauth20.autoauthorize.clients" type="ws" customizable="true">
      <value>cnea-id</value>
      <value>cnea-id2</value>
      <value>proxyeast</value>
      <value>proxywest</value>
    </parameter>
    
    Note: When the hybrid integration kit is upgraded on the DASH host, repeat the steps to update the OAuth settings and the NetcoolOAuthProvider.xml file. The NetcoolOAuthProvider.xml file is overwritten during the upgrade process, so you must reconfigure it. After the NetcoolOAuthProvider.xml file is updated, restart the DASH server.

Set up an HTTP Load Balancer

  1. Deploy an HTTP Load Balancer and configure both DASH instances to be accessed through the HTTP Load Balancer.
    Note: If you already deployed an HTTP Load Balancer, you must regenerate the plugin-cfg.xml file. For more information, see Generating the plugin-cfg.xml file in the Netcool/OMNIbus documentation.
    1. Download and install the HTTP Load Balancer. For more information, see Downloading the HTTP server in the IBM Tivoli® Netcool/OMNIbus documentation.
    2. Prepare the HTTP Load Balancer for load balancing. For more information, see Preparing the HTTP server for load balancing in the IBM Tivoli Netcool/OMNIbus documentation.
    3. Set the clone IDs on each DASH host. For more information, see Setting clone IDs for nodes in the Netcool/OMNIbus documentation.
    4. Generate the plugin-cfg.xml file. For more information, see Generating the plugin-cfg.xml file in the Netcool/OMNIbus documentation.
      Note: This step must be completed after the hybrid integration kit is deployed on each DASH host. Deployment of the hybrid integration kit adds extra lines to the plugin-cfg.xml file.
      Add the following Uri element, which enables jsessionid affinity, to the plugin-cfg.xml file.
      <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/oauth2/*"/>
    5. Configure SSL between the IBM HTTP Load Balancer plug-in and each node in the cluster. For more information, see Configuring SSL from each node to the IBM HTTP Server in the Netcool/OMNIbus documentation.
    6. Run the following commands to extract the relevant certificates from each DASH instance and import them into the HTTP Load Balancer keystore.
      Retrieve DASH primary instance certificates.
      openssl s_client -showcerts -verify 5 -connect <dash_primary_instance_dns>:16311 < /dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="DASH1_cert"a".pem"; print >out}'
      Retrieve DASH backup instance certificates.
      openssl s_client -showcerts -verify 5 -connect <dash_backup_instance_dns>:16311 < /dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="DASH2_cert"a".pem"; print >out}'
      Import root CA certificates from DASH instances into the HTTP keystore.
      /opt/IBM/HTTPServer/bin/gskcmd -cert -add -db /opt/IBM/HTTPServer/conf/plugin-key.kdb -file DASH1_cert1.pem -label DASH1_CA -pw WebAS
      /opt/IBM/HTTPServer/bin/gskcmd -cert -add -db /opt/IBM/HTTPServer/conf/plugin-key.kdb -file DASH2_cert2.pem -label DASH2_CA -pw WebAS
      

      In the WebSphere Application Server console, go to the truststore. To add the HTTP Load Balancer certificate to each WebSphere Application Server, go to the signer certificates section and use the Retrieve from port option.
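      The awk filter in the openssl s_client commands above splits a PEM bundle into one file per certificate. The following is a minimal local sketch of the same filter; it uses two throwaway self-signed certificates instead of a live DASH connection, and all file names and subjects are illustrative.

      ```shell
      # Create two throwaway self-signed certificates to stand in for a
      # certificate chain returned by openssl s_client (illustrative only).
      openssl req -x509 -newkey rsa:2048 -nodes -keyout k1.pem -out c1.pem \
          -days 1 -subj "/CN=test-cert-1"
      openssl req -x509 -newkey rsa:2048 -nodes -keyout k2.pem -out c2.pem \
          -days 1 -subj "/CN=test-cert-2"

      # The same awk filter as in the procedure: each BEGIN line increments the
      # counter "a", so every certificate in the bundle lands in its own
      # DASH1_cert<N>.pem file.
      cat c1.pem c2.pem | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="DASH1_cert"a".pem"; print >out}'

      # Each split file is a valid certificate on its own.
      openssl x509 -in DASH1_cert1.pem -noout -subject
      openssl x509 -in DASH1_cert2.pem -noout -subject
      ```

      Against a real DASH instance, DASH1_cert1.pem holds the server certificate and the later files hold the CA certificates from the chain, which is why the gskcmd imports above reference different file numbers.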

    7. After the certificates and load balancing setup for the HTTP Load Balancer is complete, you can start the HTTP Load Balancer.
      <HTTP_Server_home>/bin/apachectl start
      Access the DASH instances by using the HTTP Load Balancer URL, without a port number, for example, https://<HTTP_Server_Host>/ibm/console.

Set up certificates for the HAproxy hosts

  1. Set up the certificates for both HAproxy hosts by completing the following steps.
    1. Obtain the CA root and CA intermediate certificates for your organization. If these are not available, create self-generated root CA and intermediate certificates.
    2. Create an openssl.cfg file on the primary proxy host (proxy.east.example.com) and add the following contents:
      [req]
      distinguished_name       = req_distinguished_name
      req_extensions           = req_ext
      [req_distinguished_name]
      countryName              = Country name from profile
      countryName_default      = US
      stateOrProvinceName      = State or province from profile
      localityName             = Locality name from profile
      organizationName         = Organization name
      organizationName_default = IBM
      organizationalUnitName   = Organization unit from profile
      commonName               = Common name from profile
      [req_ext]
      subjectAltName           = @alt_names
      [alt_names]
      DNS.1 = proxy.west.example.com
      DNS.2 = ...
      DNS.3 = ...
      
    3. Run the following command. You are prompted for the values from the profile that you created. For the Organization name value, enter IBM.
      openssl req -out proxy.csr -newkey rsa:2048 -nodes -keyout proxy.key -config openssl.cfg -reqexts req_ext
    4. Run the following command to inspect the certificate signing request.
      openssl req -in proxy.csr -noout -text
      Keep the proxy.csr and proxy.key files in a safe location.
    5. Sign the generated certificate signing request (CSR) with the root CA authority certificate. This step generates a server certificate.
    6. (Optional) Run the following commands. The first command converts the downloaded cert.crt DER file to a proxy.crt PEM file. The second command describes the certificate. Check that the validity period is correct, and that the X509v3 Subject Alternative Name contains the FQDNs that you require.
      openssl x509 -in cert.crt -inform der -out proxy.crt
      openssl x509 -in proxy.crt -noout -text
      
    7. Using the downloaded root CA and intermediate certificates, run the following commands to convert and check the certificates.
      openssl x509 -in caintermediatecert.der -inform der -out caintermediate.crt
      openssl x509 -in carootcert.der -inform der -out caroot.crt
      openssl x509 -in caintermediate.crt -noout -text
      openssl x509 -in caroot.crt -noout -text
      
    8. Create a PEM file by concatenating the intermediate certificate to the root certificate. Keep this certificate file in a safe location.
    9. Run the following command to add the intermediate certificate to the end of the proxy certificate file.
      cat caintermediate.crt >> proxy.crt
    10. After all the files are generated, add them to a new directory on each proxy host, for example, the /root/certs directory. These files are used when you run the HAproxy load balancer.
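    If you create self-generated certificates as in substep 1, the whole chain (root CA, intermediate CA, and proxy server certificate) can be sketched end to end with openssl. All subjects, file names, and validity periods in this sketch are illustrative assumptions; use your organization's CA where one exists.

    ```shell
    # Illustrative self-generated chain (root CA -> intermediate CA -> proxy cert).
    # Subjects and lifetimes are assumptions, not required values.
    set -e

    # 1. Self-signed root CA
    openssl req -x509 -newkey rsa:2048 -nodes -keyout caroot.key -out caroot.crt \
        -days 365 -subj "/C=US/O=IBM/CN=Example Root CA"

    # 2. Intermediate CA, signed by the root with CA extensions
    openssl req -newkey rsa:2048 -nodes -keyout caintermediate.key \
        -out caintermediate.csr -subj "/C=US/O=IBM/CN=Example Intermediate CA"
    printf 'basicConstraints=CA:TRUE\nkeyUsage=keyCertSign\n' > ca.ext
    openssl x509 -req -in caintermediate.csr -CA caroot.crt -CAkey caroot.key \
        -CAcreateserial -days 365 -out caintermediate.crt -extfile ca.ext

    # 3. Proxy server certificate with the SANs from openssl.cfg,
    #    signed by the intermediate CA
    openssl req -newkey rsa:2048 -nodes -keyout proxy.key -out proxy.csr \
        -subj "/C=US/O=IBM/CN=proxy.east.example.com"
    printf 'subjectAltName=DNS:proxy.east.example.com,DNS:proxy.west.example.com\n' > san.ext
    openssl x509 -req -in proxy.csr -CA caintermediate.crt -CAkey caintermediate.key \
        -CAcreateserial -days 365 -out proxy.crt -extfile san.ext

    # Step 8: concatenate the root and intermediate into one CA chain file
    cat caroot.crt caintermediate.crt > ca-chain.pem
    # Step 9: append the intermediate to the proxy certificate file
    cat caintermediate.crt >> proxy.crt

    # The proxy certificate verifies against the chain; prints "proxy.crt: OK"
    openssl verify -CAfile ca-chain.pem proxy.crt
    ```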

Configure the OAuthDBSchema.OAUTH20CLIENTCONFIG ObjectServer table

  1. Configure the OAuthDBSchema.OAUTH20CLIENTCONFIG ObjectServer table to include each of the client-ids that are listed in the NetcoolOAuthprovider.xml file.
    COMPONENTID          CLIENTID   CLIENTSECRET  DISPLAYNAME       REDIRECTURI                                                                  ENABLED
    NetcoolOAuthProvider cnea-id    cnea-secret   Hdm-client        https://<primary_cluster_route_name>/users/api/authprovider/v1/was/return    1
    NetcoolOAuthProvider cnea-id2   cnea-secret2  Hdm-client2       https://<secondary_cluster_route_name>/users/api/authprovider/v1/was/return  1
    NetcoolOAuthProvider proxyeast  secreteast    Proxyeast-client  https://<proxy1_hostname.dns_name>/users/api/authprovider/v1/was/return      1
    NetcoolOAuthProvider proxywest  secretwest    Proxywest-client  https://<proxy2_hostname.dns_name>/users/api/authprovider/v1/was/return      1
    You can complete this step by using the Netcool Administrator tool.
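    As an alternative to the Netcool Administrator tool, the rows in the table can be added through nco_sql. The following is a hedged sketch for the first row only; repeat it for the other client-ids, substituting your own route names and secrets.

    ```sql
    -- Illustrative only: run through nco_sql against the aggregation ObjectServer,
    -- for example: $NCHOME/omnibus/bin/nco_sql -user root -password <password> -server AGG_P
    insert into OAuthDBSchema.OAUTH20CLIENTCONFIG
        (COMPONENTID, CLIENTID, CLIENTSECRET, DISPLAYNAME, REDIRECTURI, ENABLED)
    values
        ('NetcoolOAuthProvider', 'cnea-id', 'cnea-secret', 'Hdm-client',
         'https://<primary_cluster_route_name>/users/api/authprovider/v1/was/return', 1);
    go
    ```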

Configure the noihybrid YAML files with HADR

  1. Edit the noihybrid deployment YAML file on the primary and backup clusters.
    On the primary cluster, add the following lines:
      dash:
        crossRegionUrls:
        - dash: https://dash-east.example.com:16311
          proxy: https://proxy.east.example.com
        - dash: https://dash-west.example.com:16311
          proxy: https://proxy.west.example.com
        trustedCAConfigMapName: users-certificates
        url: https://dash-east.example.com
        username: smadmin
      serviceContinuity:
        continuousAnalyticsCorrelation: true
    
    Where:
    • The dash.crossRegionUrls parameter points to the DASH or WebGUI instances. The proxy values point to the HAproxy that is associated with each DASH or WebGUI instance.
    • The dash.url parameter points to the HTTP load balancer.
    On the backup cluster, add the following lines:
      dash:
        crossRegionUrls:
        - dash: https://dash-east.example.com:16311
          proxy: https://proxy.east.example.com
        - dash: https://dash-west.example.com:16311
          proxy: https://proxy.west.example.com
        trustedCAConfigMapName: users-certificates
        url: https://dash-east.example.com
        username: smadmin
      serviceContinuity:
        continuousAnalyticsCorrelation: true
      helmValuesNOI:
        ibm-ea-dr-coordinator-service.coordinatorSettings.backupDeploymentSettings.proxyURLs: https://proxy.east.example.com,https://proxy.west.example.com 
        ibm-ea-dr-coordinator-service.coordinatorSettings.storageClassName: rook-cephfs
    
    Optional: On the backup cluster, you can use the trusted certificates for SSL connection to the primary coordinator by adding the following parameters.
    helmValuesNOI:
      ibm-ea-dr-coordinator-service.coordinatorSettings.backupDeploymentSettings.proxyCertificateConfigMap: users-certificates
      ibm-ea-dr-coordinator-service.coordinatorSettings.backupDeploymentSettings.proxySSLCheck: true
    If you want to ignore the SSL certificate check, set the proxySSLCheck parameter to false.
    Later steps describe the creation of the users-certificates configmap on each cluster, which is used by both the coordinator service and DASH.

Create a users-certificates configmap for accessing the HTTP Load Balancer

  1. Create a users-certificates configmap, which allows a connection from the cluster to DASH. The root CA and intermediate CA certificates must be added to the users-certificates configmap. Complete this step for each cluster in your HADR hybrid deployment. For more information about creating the configmap, see Creation of configmap for access to on-premises WebSphere Application Server and Configmap for root certificates of the proxies.

Create coordinator service secrets on the primary and backup deployments

  1. Follow the steps in Coordinator service secret and add the following lines to the noihybrid YAML file:
    dash:
      crossRegionUrls:
      - proxy: https://netcool.east.example.com
        dash: https://webgui.east.example.com
      - proxy: https://netcool.west.example.com
        dash: https://webgui.west.example.com

Set up the coordinator service

  1. Set up the coordinator service. For more information, see Setting up the coordinator service.

    Set the serviceContinuity.continuousAnalyticsCorrelation parameter to true. This step assumes that you have two deployments, one primary and one backup.

    Nominate one of the deployments as the primary deployment by setting the serviceContinuity.isBackupDeployment parameter to false. Nominate the other deployment as the backup deployment by setting the serviceContinuity.isBackupDeployment parameter to true.

    Note: You can apply the changes to an existing environment, or you can configure the settings before the installation of IBM Netcool Operations Insight.

Restart pods

  1. Restart the common-ui and cem-users pods on each Netcool Operations Insight on Red Hat OpenShift cluster.
    1. Find the names of the common-ui and cem-users pods by running the following commands:
      oc get pod | grep common-ui
      oc get pod | grep cem-users
    2. Restart the pods with the following command:
      oc delete pod pod_name
      where pod_name is the name of the pod to be restarted.
    Ensure that the pods restart without error.

Configure the HAproxy on the primary and backup deployments

  1. To configure HAproxy, update the haproxy.cfg file on each proxy host.
    1. Update the haproxy.cfg file for each proxy host to perform the dashboard switching task for failover or failback. For more information, see HAproxy configuration.
    2. Start the HAproxy instance to begin load balancing between the primary and backup clusters. The following example uses Podman to run HAproxy. You can also run HAproxy outside of Podman.
      podman run --name <unique_container_name> -p 443:443/tcp \
          -v /root/new_certs:/root/new_certs \
          -v /etc/haproxy:/usr/local/etc/haproxy:ro haproxy:2.3
      Where /root/new_certs is the certificates directory, which contains the PEM file for HAproxy and the CA signer certificates (including the root and intermediate certificates). The podman run command also pulls the HAproxy image to use, in this case version 2.3. You must have a haproxy.cfg file in the /etc/haproxy directory on the host; the container mounts it at /usr/local/etc/haproxy.
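    For orientation, the following is a minimal sketch of the shape a haproxy.cfg file can take: a front end on port 443 with the proxy certificate, and a back end for the primary cluster. The directives that perform the dashboard switching for failover or failback are described in the HAproxy configuration topic; all names and paths here are assumptions.

    ```
    # Illustrative haproxy.cfg sketch only; not the supported configuration.
    global
        maxconn 1024

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend https_in
        # proxy.pem is assumed to hold the concatenated certificate and key
        bind *:443 ssl crt /root/new_certs/proxy.pem
        default_backend primary_cluster

    backend primary_cluster
        # Replace with your primary cluster route; the switching logic for
        # failover or failback comes from the HAproxy configuration topic.
        server primary <primary_cluster_route_name>:443 ssl verify none
    ```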

What to do next

Complete the postinstallation steps in Postinstallation setup and verification.