Question & Answer
Question
What are the differences between HA on HP and IBM hosts?
Answer
The following information is excerpted from the IBM Netezza System Administrator’s Guide for Release 5.0.4.
In prior releases (on HP-based hosts), the NPS HA solution used Red Hat Cluster Manager as the foundation for managing HA systems. The Linux-HA solution, used on IBM-based hosts, manages the cluster with different commands. The following table outlines the common tasks and the commands used in each HA environment.
| Task | Old command (Cluster Manager) | New command (Linux-HA) |
| --- | --- | --- |
| Display cluster status | clustat -i 5 | crm_mon -i5 |
| Relocate the NPS service | cluadmin -- service relocate nps | /nzlocal/scripts/heartbeat_admin.sh --migrate |
| Enable the NPS service | cluadmin -- service enable nps | crm_resource -r nps -p target_role -v started |
| Disable the NPS service | cluadmin -- service disable nps | crm_resource -r nps -p target_role -v stopped |
| Start the cluster on each node | service cluster start | service heartbeat start |
| Stop the cluster on each node | service cluster stop | service heartbeat stop |
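The mapping in the table can be captured in a small wrapper script. The sketch below is a hypothetical helper (not part of the NPS toolset) that prints the Linux-HA equivalent of each legacy task; it is a dry run that echoes the command rather than executing it, so it is safe to try anywhere.

```shell
#!/bin/sh
# Hypothetical helper: map a legacy Cluster Manager task name to the
# equivalent Linux-HA command from the table above. Dry-run only: the
# command is printed, not executed.
ha_cmd() {
  case "$1" in
    status)   echo "crm_mon -i5" ;;
    relocate) echo "/nzlocal/scripts/heartbeat_admin.sh --migrate" ;;
    enable)   echo "crm_resource -r nps -p target_role -v started" ;;
    disable)  echo "crm_resource -r nps -p target_role -v stopped" ;;
    start)    echo "service heartbeat start" ;;
    stop)     echo "service heartbeat stop" ;;
    *)        echo "unknown task: $1" >&2; return 1 ;;
  esac
}

ha_cmd enable
# -> crm_resource -r nps -p target_role -v started
```

Removing the echo (or piping the output to sh) would turn the dry run into a real invocation on a cluster host.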
- All Linux-HA and DRBD logging information is written to /var/log/messages on each host. For more information about the log files, refer to the IBM Netezza System Administrator’s Guide.
- In the new cluster environment, pingd has replaced netchecker (the Network Failure Daemon). pingd is a built-in part of the Linux-HA suite.
Note: You will see “netchecker” entries in /var/log/messages when a reboot, relocate, or failover occurs. You can safely ignore these messages, which will be removed in a future release.
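When reviewing /var/log/messages, the live pingd entries can be separated from the stale netchecker noise with a simple filter. The sketch below runs against an inline sample excerpt (the log lines are illustrative, not real daemon output):

```shell
#!/bin/sh
# Sketch: filter pingd entries out of a log excerpt, ignoring the
# leftover netchecker lines noted above. The sample lines below are
# illustrative; real /var/log/messages content will differ.
LOG=$(cat <<'EOF'
Jan 10 12:00:01 ha1 pingd: node ha2 reachable
Jan 10 12:00:02 ha1 netchecker: legacy message
Jan 10 12:00:03 ha1 pingd: connectivity ok
EOF
)

echo "$LOG" | grep pingd
```

On a live host, the same filter would be `grep pingd /var/log/messages`.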
- The Cluster Manager HA solution also required a storage array (the MSA500) as a quorum disk to hold the shared data. The Linux-HA/DRBD solution does not use a storage array; instead, DRBD automatically mirrors the data in the /nz and /export/home partitions from the primary NPS host to the secondary NPS host.
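Because the shared data now lives in DRBD-mirrored partitions, a healthy cluster depends on the mirror being connected and up to date. The sketch below checks a status line in the /proc/drbd style; the exact field layout is an assumption here, and the sample string stands in for real kernel output:

```shell
#!/bin/sh
# Sketch: classify a DRBD status line as healthy or degraded. A healthy
# mirror shows a Connected state (cs:) and UpToDate disks (ds:) on both
# sides. The sample line mimics the /proc/drbd format (assumed layout).
STATUS="0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate"

case "$STATUS" in
  *cs:Connected*ds:UpToDate/UpToDate*) echo "mirror healthy" ;;
  *)                                   echo "mirror degraded" ;;
esac
```

On a live host, the same check would read the status line from /proc/drbd instead of a fixed string.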
- The HA models of the NPS 10000-series SL/XL systems (that is, the 10200, 10400, 10600, and 10800) all include a Storage Pad array as part of the system components. The array is not used for any HA operations; instead, it serves as a local SAN for common NPS tasks such as load staging and backups, and as a general mounted file system for permitted users. This document covers some aspects of the Storage Pad array because the HA solution monitors the array's services for connectivity and availability.
Note: The /nzdata and /shrres file systems on the MSA500 are deprecated. However, if you are migrating to a SL/XL system, you can recreate these locations on the Storage Pad array if desired. For more information, see the NPS Storage Pad Administrator’s Guide.
- In some customer environments that used the previous Cluster Manager solution, it was possible to run only the active NPS host while the secondary was powered off; if problems occurred on HA1, the onsite NPS administrator would power off HA1 and power on HA2. In the Linux-HA/DRBD solution, both NPS hosts must be operational at all times. DRBD keeps the data on both hosts synchronized, and when Heartbeat detects problems on HA1, the software automatically fails over to HA2 with no manual intervention.
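After an automatic failover, an administrator typically wants to know which host is currently running the nps resource. The sketch below extracts that from a crm_mon-style status excerpt; the sample text and host names (nzhost1, nzhost2) are illustrative assumptions, not real cluster output:

```shell
#!/bin/sh
# Sketch: find the host currently running the nps resource from a
# crm_mon-style status excerpt. Sample text and host names are assumed;
# real output comes from "crm_mon -1" on a cluster host.
MON=$(cat <<'EOF'
Node: nzhost1 (online)
Node: nzhost2 (online)
nps (ocf::heartbeat:nps): Started nzhost1
EOF
)

# The resource line ends with the host name; print the last field.
echo "$MON" | awk '/^nps / {print $NF}'
```

After a failover from HA1, the same parse would report the secondary host instead.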
Historical Number
NZ809077
Document Information
Modified date:
17 October 2019
UID
swg21571983