Troubleshooting
Problem
This document provides an example of how to create a cluster between two systems or partitions using the command line (green screen) interface.
Resolving The Problem
When creating an environment for high availability, it is commonly necessary to use clustering technology between the systems involved. Systems or partitions in a clustered environment are called nodes. This document describes how to create a 2-node cluster from the command line interface. The same steps apply when creating a cluster with more nodes; be sure to include all nodes where applicable.
Note: This document assumes all nodes are at the same version/release of the operating system. If this is not true in your environment, check with the Rochester Support Center before attempting to configure and start clustering.
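For example, one way to confirm the operating system release on each node is to display the PTF status of the base operating system licensed program (5770SS1 is the licensed program ID for IBM i 7.x and is shown here only as an illustration; use the ID that matches your release):
DSPPTF LICPGM(5770SS1)
The resulting display includes the release of the base option.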
1. On each node, ensure that the Internet Daemon (INETD) server is started.
o Use WRKACTJOB SBS(QSYSWRK) to see if the QTOGINTD (INETD Server Job) is active (should be in SELW status).
o If the server job is not active, use STRTCPSVR SERVER(*INETD) to start the INETD server.
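If you prefer to look for the server job by name rather than scanning the whole subsystem, the same command can be qualified with the job name (a minor variation, shown only as a convenience):
WRKACTJOB SBS(QSYSWRK) JOB(QTOGINTD)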
2. Determine the IP interface(s) -- either one or two -- to be used for clustering communications. Note: IBM recommends that the chosen interface(s) not be heavily saturated with other traffic, because heavy traffic can hinder the ability of the cluster nodes to communicate.
o Use CFGTCP, Option 1 (Work with TCP/IP interfaces) to determine which interface(s) to use for cluster communications.
Example: System A has an IP interface of 192.168.30.5. System B has an IP interface of 192.168.30.6.
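The same interface information can also be displayed directly from the command line with the NETSTAT interface option (assuming TCP/IP has been started on the node):
NETSTAT OPTION(*IFC)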
3. Test basic IP communications between the interfaces (chosen in Step 2 above).
Example (using IP addresses in Step 2 above):
o Use the PING RMTSYS('192.168.30.6') LCLINTNETA('192.168.30.5') command to test communications from System A to System B.
o Use the PING RMTSYS('192.168.30.5') LCLINTNETA('192.168.30.6') command to test communications from System B to System A.
4. Create the cluster for the desired nodes on ONE NODE ONLY. Note: The example below has two nodes, each with one IP interface. If more nodes or interfaces are desired, specify them here (see the sketch after the command below).
CRTCLU CLUSTER(cluster_name) NODE((SYSTEMA ('192.168.30.5')) (SYSTEMB ('192.168.30.6')))
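As a sketch only, the same command with a second interface address for each node and a third node added might look like the following (the 192.168.31.x addresses and SYSTEMC are illustrative values, not part of the example above):
CRTCLU CLUSTER(cluster_name) NODE((SYSTEMA ('192.168.30.5' '192.168.31.5')) (SYSTEMB ('192.168.30.6' '192.168.31.6')) (SYSTEMC ('192.168.30.7')))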
5. Start clustering first on the node where the cluster was created. The example assumes the cluster was created (Step 4 above) on SYSTEMA. Therefore, the cluster is first being started on SYSTEMA.
STRCLUNOD CLUSTER(cluster_name) NODE(SYSTEMA)
6. Start clustering on the remaining node(s). This command can be run from any active node in the cluster.
STRCLUNOD CLUSTER(cluster_name) NODE(SYSTEMB)
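If the cluster contains more than two nodes, repeat the command for each remaining node; for example (SYSTEMC is an illustrative node name, not part of the example above):
STRCLUNOD CLUSTER(cluster_name) NODE(SYSTEMC)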
7. Check the status of the cluster on all nodes in the cluster. Sample output is shown below; the cluster name, node names, and interface addresses in the sample are from a different system and will differ from the example values used in the steps above:
DSPCLUINF <enter>
Cluster . . . . . . . . . . . . . : CLU730
Consistent information in cluster : Yes <- If this value is No, the cluster may not be fully active.
Number of cluster nodes . . . . . : 2
Number of device domains . . . . . : 1
Current cluster version . . . . . : 8
Current cluster modification level : 1
Current PowerHA version . . . . . : 3.1
Configuration tuning level . . . . : *NORMAL
Cluster message queue . . . . . . : *NONE
Library . . . . . . . . . . . . : *NONE
Failover wait time . . . . . . . . : *NOWAIT
Failover default action . . . . . : *PROCEED
Cluster Membership List
Node Status ------Interface Addresses------
RCH730A Active 9.5.67.119
RCH730B Active 9.5.65.34
* The "Status" column shows the status of both nodes in the cluster
Historical Number
515055787
Document Information
Modified date:
11 November 2019
UID
nas8N1013170