IBM Support

Cluster: How To Create a Cluster Between Two Systems/Partitions - Example

Troubleshooting


Problem

This document provides an example of how to create a cluster between two systems or partitions using the command line (green screen) interface.

Resolving The Problem

When creating an environment for high availability, it is commonly necessary to use clustering technology between the systems involved. Systems or partitions in a clustered environment are called nodes. This document describes how to create a 2-node cluster from the command line interface. Follow the same steps if creating a cluster with more nodes; however, be sure to include all nodes, where applicable.

Note: This document assumes all nodes are at the same version/release of the operating system. If this is not true of your environment, check with the IBM i Global Support Center before attempting to configure and start clustering.

1. On each node, ensure that the Internet Daemon (INETD) server is started.

o Use WRKACTJOB SBS(QSYSWRK) to see if the QTOGINTD (INETD Server Job) is active (should be in SELW status).
o If the server job is not active, use STRTCPSVR SERVER(*INETD) to start the INETD server.

2. Determine the IP interface(s) -- either one or two -- to be used for clustering communications. Note: IBM recommends that the interface(s) not be heavily saturated with other traffic; heavy traffic can hinder the ability of clustering to communicate.

o Use CFGTCP, Option 1 (Work with TCP/IP interfaces) to determine which interface(s) to use for cluster communications.

Example: System A has an IP interface of 192.168.30.5. System B has an IP interface of 192.168.30.6.

3. Test basic IP communications between the interfaces (chosen in Step 2 above).

Example (using IP addresses in Step 2 above):

o Use the PING RMTSYS('192.168.30.6') LCLINTNETA('192.168.30.5') command to test communications from System A to System B.
o Use the PING RMTSYS('192.168.30.5') LCLINTNETA('192.168.30.6') command to test communications from System B to System A.

4. Create the cluster for the nodes desired on ONE NODE ONLY. Note: The example below has two nodes, each with one IP interface. If more nodes or interfaces are desired, they should be specified here.

CRTCLU CLUSTER(cluster_name) NODE((SYSTEMA ('192.168.30.5')) (SYSTEMB ('192.168.30.6')))

Note that the default parameters START(*YES) and DEVDMN(*GEN) automatically start the nodes and add them to the same device domain.
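If CRTCLU was run with START(*NO), or a node later shows an Inactive status, each node can be started individually with the Start Cluster Node command. The cluster and node names below are placeholders; substitute the values used in your CRTCLU command:

STRCLUNOD CLUSTER(cluster_name) NODE(SYSTEMB)

Run this from a node that is already active in the cluster.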


5. Check the status of the cluster on all nodes in the cluster (example below):

DSPCLUINF <enter>

PowerHA                  Display Cluster Information
                                                   
Cluster  . . . . . . . . . . . . . :   CLU740      
Consistent information in cluster  :   Yes          <- If this value is No, the cluster may not be fully active
                                                   
Current PowerHA version  . . . . . :   4.14.3       
Current cluster version  . . . . . :   9.9          
                                                   
Cluster message queue  . . . . . . :   *NONE        
 Library  . . . . . . . . . . . . :     *NONE      
Failover wait time . . . . . . . . :   *NOWAIT      
Failover default action  . . . . . :   *PROCEED     
 

Pressing Enter once brings up the Cluster Membership List:

                           Cluster Membership List         
                                                          
Node         Status         ------Interface Addresses------
P401A3       Active         9.5.161.233                    
P401A4       Active         9.5.161.238                    

* The "Status" column shows the status of each node in the cluster.
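If another node needs to join the cluster after it has been created, it can be added from a node that is already active. The node name and IP address below are placeholders for illustration:

ADDCLUNODE CLUSTER(cluster_name) NODE(SYSTEMC ('192.168.30.7'))

The WRKCLU (Work with Cluster) command can also be used to display and work with cluster nodes interactively.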


Historical Number

515055787

Document Information

Modified date:
03 April 2026

UID

nas8N1013170