
Contrasting the High Availability solution on two types of IBM Netezza hosts

Question & Answer


Question

What are the differences between HA on HP and IBM hosts?

Answer

The following information is excerpted from the IBM Netezza System Administrator’s Guide for Release 5.0.4.

In prior releases, which ran on HP hosts, the NPS HA solution used Red Hat Cluster Manager as the foundation for managing HA systems. On IBM hosts, the Linux-HA solution uses different commands to manage the cluster. The following table lists the common tasks and the commands used in each HA environment.

Task                             Old Command (Cluster Manager)      New Command (Linux-HA)
Display cluster status           clustat -i 5                       crm_mon -i5
Relocate the NPS service         cluadmin -- service relocate nps   /nzlocal/scripts/heartbeat_admin.sh --migrate
Enable the NPS service           cluadmin -- service enable nps     crm_resource -r nps -p target_role -v started
Disable the NPS service          cluadmin -- service disable nps    crm_resource -r nps -p target_role -v stopped
Start the cluster on each node   service cluster start              service heartbeat start
Stop the cluster on each node    service cluster stop               service heartbeat stop
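
For example, to take the NPS resource offline for maintenance and bring it back afterward on a Linux-HA system, the commands from the table can be combined as follows (a usage sketch only; run as root on the active host):

    # Stop the NPS service (sets the resource's target_role to stopped).
    crm_resource -r nps -p target_role -v stopped

    # Restart the NPS service when maintenance is complete.
    crm_resource -r nps -p target_role -v started

    # Relocate the NPS service to the standby host.
    /nzlocal/scripts/heartbeat_admin.sh --migrate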
Additional points of difference between the two solutions:

· All Linux-HA and DRBD logging information is written to /var/log/messages on each host. For more information about the log files, refer to the IBM Netezza System Administrator’s Guide.

· In the new cluster environment, pingd has replaced netchecker (the Network Failure Daemon). pingd is a built-in part of the Linux-HA suite; a configuration sketch appears after this list.

Note: You will see “netchecker” entries in /var/log/messages when a reboot, relocate, or failover occurs. You can safely ignore these messages; they will be removed in a future release.
· The cluster manager HA solution also required a storage array (the MSA500) as a quorum disk to hold the shared data. The new Linux-HA/DRBD solution does not use a storage array; instead, DRBD automatically mirrors the data in the /nz and /export/home partitions from the primary NPS host to the secondary NPS host (see the DRBD resource sketch after this list).
· The HA models of the NPS 10000-series SL/XL systems (that is, the 10200, 10400, 10600, and 10800) all include a Storage Pad array as part of the system components. The array is not used for any HA operations; instead, it serves as a local SAN for common NPS tasks such as load staging and backups, and as a general mounted file system for permitted users. This document describes some aspects of the Storage Pad array because the array has services that the HA solution monitors for connectivity and availability.

Note: The /nzdata and /shrres file systems on the MSA500 are deprecated. However, if you are migrating to a SL/XL system, you can recreate these locations on the Storage Pad array if desired. For more information, see the NPS Storage Pad Administrator’s Guide.
· In some customer environments that used the previous cluster manager solution, it was possible to run only the active NPS host while the secondary was powered off. If problems occurred on HA1, the onsite NPS administrator would power off HA1 and power on HA2. In the new Linux-HA/DRBD solution, both NPS hosts must be operational at all times. DRBD keeps the data on both hosts synchronized, and when Heartbeat detects problems on HA1, the software automatically fails over to HA2 with no manual intervention. A sketch for verifying the replication and cluster state appears after this list.
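
In standard Heartbeat deployments, pingd is enabled through the cluster configuration file. The fragment below is a minimal sketch only; the ping-node address is hypothetical, and on an NPS system the actual configuration is installed by IBM Netezza and should not be edited by hand:

    # /etc/ha.d/ha.cf (illustrative fragment)

    # Hypothetical ping node (for example, the default gateway) that
    # pingd uses to judge network connectivity from each host.
    ping 192.168.1.254

    # Run pingd under Heartbeat; it publishes a node attribute whose
    # value reflects the number of reachable ping nodes (weighted by
    # -m 100), dampened for 5 seconds to absorb transient outages.
    respawn hacluster /usr/lib/heartbeat/pingd -m 100 -d 5s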
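
To illustrate how DRBD pairs the two hosts, the following drbd.conf resource stanza is a generic DRBD 8-style sketch, not the shipped NPS configuration; the host names, backing devices, and addresses are all hypothetical:

    resource nz {
      protocol C;                 # synchronous replication: a write completes
                                  # only after it reaches both hosts
      on ha1 {
        device    /dev/drbd0;     # replicated block device that backs /nz
        disk      /dev/sda5;      # hypothetical local backing partition
        address   10.0.0.1:7789;  # replication link on the primary host
        meta-disk internal;
      }
      on ha2 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.2:7789;  # replication link on the secondary host
        meta-disk internal;
      }
    }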
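
To confirm that both hosts are healthy before relying on automatic failover, the cluster and replication state can be checked with standard Linux-HA and DRBD commands (a sketch; output details vary by release):

    # Watch overall cluster status, refreshing every 5 seconds.
    crm_mon -i5

    # Check DRBD replication state; a healthy pair reports
    # cs:Connected and ds:UpToDate/UpToDate.
    cat /proc/drbd

    # Verify that Heartbeat is running on this node.
    service heartbeat status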

[{"Product":{"code":"SSULQD","label":"IBM PureData System"},"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Component":null,"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"1.0.0","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

Historical Number

NZ809077

Document Information

Modified date:
17 October 2019

UID

swg21571983