Appliance nodes

IBM® Integrated Analytics System runs on multiple nodes. Each node hosts the database, GPFS file systems, management components, and other components.

All nodes in a multinode cluster have the same hardware and capacity. One node is selected to be the master node. The catalog MLN always runs on the master node.

In some situations a node might need to be taken out of service so that it no longer runs the appliance application (Db2 Warehouse) and can be serviced without impacting the rest of the appliance. If a node becomes non-operational, the workload and MLNs are redistributed across the remaining nodes. Some multi-rack IAS appliances include spare nodes, which can be used for failover within HA domains.

If service work is required, you can manually disable and then enable a node by using the CLI or the web console. However, physical removal of a node must be performed only by IBM Support.

Node failure

In a single-rack appliance, if a node fails, MLNs are redistributed among the surviving nodes to bring the system back online. The redistribution shares the load fairly among the surviving nodes. Some performance degradation is expected after a failover, because each surviving node runs a larger number of MLNs with fewer resources available.
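The fair redistribution described above can be illustrated with a minimal sketch. This is a hypothetical round-robin model, not the appliance's actual placement algorithm; the node names and MLN numbering are invented for the example.

```python
# Hypothetical sketch of fair MLN redistribution after a node failure
# in a single-rack cluster (not the appliance's actual algorithm).

def redistribute(assignments, failed_node):
    """assignments: dict mapping node name -> list of MLN ids.
    Moves the failed node's MLNs to the survivors, round-robin,
    so no survivor takes more than one MLN beyond its fair share."""
    orphaned = assignments.pop(failed_node)
    survivors = sorted(assignments)
    for i, mln in enumerate(orphaned):
        assignments[survivors[i % len(survivors)]].append(mln)
    return assignments

# Example: three nodes with four MLNs each; node3 fails.
cluster = {
    "node1": [0, 1, 2, 3],
    "node2": [4, 5, 6, 7],
    "node3": [8, 9, 10, 11],
}
after = redistribute(cluster, "node3")
# Each survivor now runs six MLNs instead of four, which is why
# performance degrades until the failed node is returned to service.
```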

In a multi-rack appliance, there are multiple HA domains, each of which includes a spare node. If a node fails, the MLNs (Db2 data partitions) from the failed node are moved to the spare node within its HA domain.
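The multi-rack case differs from the single-rack case in that the failed node's MLNs move wholesale to the spare of the same HA domain rather than being spread across survivors. The sketch below models that behavior; the domain layout, node names, and the failover function are illustrative assumptions, not the appliance's implementation.

```python
# Hypothetical sketch of spare-node failover within an HA domain
# on a multi-rack appliance (illustrative model only).

def failover_to_spare(domains, failed_node):
    """domains: list of dicts, each with an 'active' map
    (node -> list of MLN ids) and a 'spares' list.
    Moves all MLNs from the failed node to a spare in the
    same HA domain, and returns the spare's name."""
    for domain in domains:
        if failed_node in domain["active"]:
            spare = domain["spares"].pop()
            domain["active"][spare] = domain["active"].pop(failed_node)
            return spare
    raise ValueError(f"{failed_node} not found in any HA domain")

# Example: two HA domains, one spare each; a node in rack 2 fails.
domains = [
    {"active": {"r1n1": [0, 1], "r1n2": [2, 3]}, "spares": ["r1spare"]},
    {"active": {"r2n1": [4, 5], "r2n2": [6, 7]}, "spares": ["r2spare"]},
]
spare = failover_to_spare(domains, "r2n1")
# The surviving nodes keep their original MLN counts, so steady-state
# performance is less affected than in the single-rack case.
```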

To recover data that was not yet written to the data file system at the time of the failure, and to bring Db2 back to a consistent state, crash recovery is performed. The system experiences a brief outage while the failover completes.

The number of failures that the system can tolerate depends on the number of nodes in the initial configuration and on whether there is sufficient time to recover between successive failures. For more information about high availability in IAS, see High availability.