Nodes
A node is a processor that runs both AIX® and the PowerHA® SystemMirror® software.
Nodes might share a set of resources such as disks, volume groups, file systems, networks, network IP addresses, and applications. The PowerHA SystemMirror software supports up to 16 nodes in a cluster. In a PowerHA SystemMirror cluster, each node is identified by a unique name; the node name and the hostname can usually be the same. Nodes serve as the core physical components of a PowerHA SystemMirror cluster. Two types of nodes are defined:
- Server nodes form the core of a PowerHA SystemMirror cluster. They run services or back-end applications that access data on the shared external disks.
- Client nodes run front-end applications that retrieve data from the services provided by the server nodes. Client nodes can run PowerHA SystemMirror software to monitor the health of the nodes and to react to failures.
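The constraints described above (at most 16 nodes in a cluster, each identified by a unique name that usually matches its hostname) can be pictured with a short, purely illustrative Python sketch. This is not a PowerHA SystemMirror interface; the class and attribute names are invented for the example.

```python
from dataclasses import dataclass, field

MAX_NODES = 16  # PowerHA SystemMirror supports up to 16 nodes in a cluster


@dataclass
class Node:
    name: str   # unique within the cluster; usually the same as the hostname
    role: str   # "server" or "client"


@dataclass
class Cluster:
    name: str
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Enforce the two constraints stated above: unique node names and
        # a maximum of 16 nodes per cluster.
        if any(n.name == node.name for n in self.nodes):
            raise ValueError(f"node name {node.name!r} already exists in this cluster")
        if len(self.nodes) >= MAX_NODES:
            raise ValueError("a PowerHA SystemMirror cluster supports at most 16 nodes")
        self.nodes.append(node)


# A hypothetical two-node cluster
cluster = Cluster("db_cluster")
cluster.add_node(Node("nodeA", "server"))
cluster.add_node(Node("nodeB", "server"))
```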
Server nodes
A cluster server node usually runs an application that accesses data on the shared external disks. Server nodes run the PowerHA SystemMirror daemons and keep resources highly available. Typically, these nodes run the applications, share storage with one another, and are reached by clients through a service IP address.
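As a sketch of that last point, a client program usually connects to the application through the service IP address (or a hostname that resolves to it) rather than to a particular server node's own address. The hostname and port below are hypothetical.

```python
import socket

# Hypothetical service address published by the cluster. Because the service
# IP address is one of the shared resources that PowerHA SystemMirror keeps
# highly available, the client does not need to know which server node is
# currently hosting the application.
SERVICE_HOST = "appsvc.example.com"   # resolves to the service IP address
SERVICE_PORT = 5000                   # hypothetical application port

def connect_to_service() -> socket.socket:
    """Open a TCP connection to the application through its service IP address."""
    return socket.create_connection((SERVICE_HOST, SERVICE_PORT), timeout=10)

if __name__ == "__main__":
    with connect_to_service() as conn:
        conn.sendall(b"PING\n")
```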
Client nodes
A full high availability solution typically includes the client machines that use the services provided by the server nodes. Client nodes can be divided into two categories: naive and intelligent.
- A naive client views the cluster as a single entity. If a server fails, the client must be restarted, or at least must reconnect to the server.
- An intelligent client is cluster-aware. A cluster-aware client reacts appropriately in the face of a server failure, connecting to an alternate server, perhaps masking the failure from the user. Such an intelligent client must have knowledge of the cluster state.
PowerHA SystemMirror extends the cluster paradigm to clients by providing both dynamic cluster configuration reporting and notification of cluster state changes, such as changes in subsystems or node failure.
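The intelligent client behavior described above can be sketched as follows: on a connection failure, the client transparently tries an alternate server and repeats the request, masking the failure from its caller. The server addresses, port, and retry policy below are hypothetical; a cluster-aware client would normally obtain its candidate servers and cluster-state notifications from PowerHA SystemMirror rather than from a hard-coded list.

```python
import socket
import time

# Hypothetical candidate servers for the application, in preference order.
SERVERS = [("nodeA.example.com", 5000), ("nodeB.example.com", 5000)]

def send_request(payload: bytes, rounds: int = 3, delay: float = 2.0) -> bytes:
    """Send one request, failing over to an alternate server when a node is down."""
    last_error = None
    for _ in range(rounds):
        for host, port in SERVERS:
            try:
                with socket.create_connection((host, port), timeout=5) as conn:
                    conn.sendall(payload)
                    return conn.recv(4096)
            except OSError as err:      # refused, timed out, reset: try the next server
                last_error = err
        time.sleep(delay)               # brief pause before the next round of attempts
    raise ConnectionError(f"all servers unavailable: {last_error}")

if __name__ == "__main__":
    print(send_request(b"PING\n"))
```

A naive client, by contrast, would simply fail (or need to be restarted) when its single server connection is lost.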