clmgr command: Quick reference

Use the following information to quickly find the most common syntax and examples for the PowerHA® SystemMirror® clmgr command.

For more detailed information about the clmgr command, see the man page documentation in the clmgr command topic.

Basic usage

Command usage Command syntax
Basic command format clmgr <ACTION> <CLASS> [<OBJECT>] [COMMAND-SPECIFIC INPUTS]
Flexible output from data retrieval commands
Displays <ATTR>="<VALUE>" pairs (default)
clmgr query <CLASS> <OBJECT>
Displays colon-delimited output
clmgr -c query <CLASS> <OBJECT>
Displays custom-delimited output
clmgr -d<C> query <CLASS> <OBJECT>
Displays quasi-XML format
clmgr -x query <CLASS> <OBJECT>
Intention Recognition: Aliases
clmgr add cluster
The add action includes the following aliases: create, make, and mk.
clmgr query node
The query action includes the following aliases: show, list, ls, and get.
Note: You can display available aliases by running the clmgr <ACTION> <CLASS> -h command.
Intention Recognition: Case insensitivity Case is ignored for all actions, classes, and input labels. For example, the following command syntax is valid:

clmgr query cluster == clmgr QueRY cLUsteR
clmgr MoVe RESource_GroUp <RG> nODe=<NODE>
Note: Case insensitivity does not apply to object labels within the PowerHA SystemMirror product. For example, you can create a node that is labeled MyNode, and the label retains that capitalization.
Intention Recognition: Abbreviations When you type syntax, you need to enter only enough letters to be unambiguous. The following examples show the exact syntax first, followed by the abbreviated syntax. Both forms of each command provide the same results.

clmgr query cluster == clmgr q cl
clmgr add tape SHARED_TAPE_RESOURCE=/dev/rmt0 == clmgr add tape SH=/dev/rmt0
Note: Abbreviations are intended for ease-of-use while you are typing from the command line. Do not use abbreviations in scripts. Abbreviations might change over time, and are not documented.
Log file /var/hacmp/log/clutils.log
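
For example, based on the aliases, abbreviations, and output options in the preceding table, the following three invocations all query the cluster configuration; the first uses the full syntax, the second combines the ls alias with the cl abbreviation, and the third requests colon-delimited output:

clmgr query cluster
clmgr ls cl
clmgr -c query cluster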

Defining basic topology

Command usage Command syntax
Define a cluster with no sites

clmgr add cluster nodes=<NODE1>,<NODE2>
clmgr add repository <DISK_IDENTIFIER>
Define a stretched cluster
Note: Sites are defined, but only one repository disk is required for a stretched cluster because the repository disk is shared by all sites.

clmgr add cluster type=stretched nodes=<NODE1>,<NODE2>,<NODE3>,<NODE4>
clmgr add site <SITENAME> nodes=<NODE1>,<NODE2>
clmgr add site <SITENAME> nodes=<NODE3>,<NODE4>
clmgr add repository <DISK_IDENTIFIER>
Define a linked cluster
Note: Sites are defined and each site has its own repository disk.

clmgr add cluster type=linked nodes=<NODE1>,<NODE2>,<NODE3>,<NODE4>
clmgr add site <SITE1> nodes=<NODE1>,<NODE2>
clmgr add site <SITE2> nodes=<NODE3>,<NODE4>
clmgr add repository <DISK_IDENTIFIER1> site=<SITE1>
clmgr add repository <DISK_IDENTIFIER2> site=<SITE2>
Create the newly defined objects on all the defined nodes
clmgr sync cluster

The alias for a cluster is cl.

Note: You must verify and synchronize the cluster after any configuration changes to replicate the change to other nodes in the cluster.
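
As an illustration of the preceding table, the following sketch defines a two-site linked cluster and then synchronizes it; the node names (nodeA1, nodeA2, nodeB1, nodeB2), site names (siteA, siteB), and repository disks (hdisk2, hdisk9) are hypothetical placeholders for your own environment:

clmgr add cluster type=linked nodes=nodeA1,nodeA2,nodeB1,nodeB2
clmgr add site siteA nodes=nodeA1,nodeA2
clmgr add site siteB nodes=nodeB1,nodeB2
clmgr add repository hdisk2 site=siteA
clmgr add repository hdisk9 site=siteB
clmgr sync cluster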

Defining resource groups

Command usage Command syntax
Define a resource group

clmgr add resource_group <RG_NAME> nodes=<NODE1>,<NODE2> \
      applications=<APP1>,<APP2> volume_group=<VG1>,<VG2> \
      service_label=<SERVICE_IP_LABEL> ...

The alias for a resource_group is rg.

Note: A resource group is a set of cluster resources that you configure and manage as a single unit.
Modify a resource group

clmgr modify resource_group <RG_NAME> FILESYSTEM=<PATH> \
      service_label=<SERVICE_IP_LABEL> ...

The alias for a resource_group is rg.
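
For example, a resource group might be defined with the syntax from the preceding table as follows; the resource group name (db_rg), node names, application controller (db_app), volume group (datavg), and service IP label (db_svc) are hypothetical:

clmgr add resource_group db_rg nodes=nodeA1,nodeA2 \
      applications=db_app volume_group=datavg \
      service_label=db_svc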

Defining application resources

Command usage Command syntax
Define an application controller
Note: You can use this command to automatically start and stop an application.

clmgr add application_controller STARTSCRIPT=<path_to_start_script> \
      STOPSCRIPT=<path_to_stop_script>

The aliases for an application_controller are ac, app, and appctl.

Note: You must specify the scripts for an application. The scripts must exist on every node the application might run on.
Define an application monitor: Process-based

clmgr add application_monitor <MONITOR> TYPE=Process MODE=longrunning \
      processes=<PROCESS_NAMES> OWNER=<USER_ID> \
      applications=<APPLICATION_CONTROLLER>

The aliases for an application_monitor are am, mon, and appmon.

You can use the ps -e command to determine the correct process names to use for an application. Do not use the ps -ef command. For example, you can use the ps -e | awk '{print $4}' | sort -u command.

Note: This type of monitoring detects the termination of one or more application processes.
Define an application monitor: Custom

clmgr add application_monitor <MONITOR> TYPE=Custom MODE=longrunning \
      monitormethod=<PATH_TO_SCRIPT> OWNER=<USER_ID> \
      applications=<APPLICATION_CONTROLLER>

The aliases for an application_monitor are am, mon, and appmon.

Note: This type of monitoring checks the health of an application by running the specified monitor method file at configurable intervals and checking the monitor's exit code. The monitor method file must exist on every node the application might run on.
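
As a sketch that ties these rows together, the following commands define an application controller and a custom long-running monitor for it. The controller name (db_app), monitor name (db_mon), script paths under /usr/local/ha, and owner (root) are hypothetical, and passing the controller name as the object of the add action is an assumption beyond the syntax shown above:

clmgr add application_controller db_app \
      STARTSCRIPT=/usr/local/ha/start_db.sh \
      STOPSCRIPT=/usr/local/ha/stop_db.sh
clmgr add application_monitor db_mon TYPE=Custom MODE=longrunning \
      monitormethod=/usr/local/ha/monitor_db.sh OWNER=root \
      applications=db_app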

Creating LVM resources

Command usage Command syntax
Create a volume group

clmgr add volume_group [<VG_NAME>] nodes=<NODE1>[,<NODE2>] \
      disks=<DISK1>,<DISK2> type=scalable

The alias for a volume_group is vg.

Create a logical volume

clmgr add logical_volume [ <LV_NAME> ] volume_group=<VG1> \
      logical_partitions=## type=jfs2 ...

The alias for a logical_volume is lv.

Create a file system: Create logical volume

clmgr add file_system <FS_NAME> volume_group=<VG1> \
      type=enhanced units=### size_per_unit=megabytes ...

The alias for a file_system is fs.

Note: You must specify the size of the file system to create this type of file system.
Create a file system: Use logical volume

clmgr add file_system <FS_NAME> logical_volume=<LV_NAME> \
      type=enhanced ...

The alias for a file_system is fs.

Note: You must specify an existing logical volume to create this style of a file system; the size of the file system is determined by that logical volume.
Create a mirror pool: All disks
clmgr add mirror_pool <POOL_NAME> volume_group=<VG_NAME>

The aliases for a mirror_pool are mp and pool.

Create a mirror pool: Specified disks

clmgr add mirror_pool <POOL_NAME> volume_group=<VG_NAME> \
      physical_volumes=<DISK1>,<DISK2>,<DISK3>

The aliases for a mirror_pool are mp and pool.
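
For example, the LVM rows above might be combined as follows to build a shared volume group, a logical volume, and a file system. The names (datavg, datalv, /datafs), disks (hdisk3, hdisk4), node names, and sizes are hypothetical, and the example assumes the file system is identified by its mount point:

clmgr add volume_group datavg nodes=nodeA1,nodeA2 \
      disks=hdisk3,hdisk4 type=scalable
clmgr add logical_volume datalv volume_group=datavg \
      logical_partitions=100 type=jfs2
clmgr add file_system /datafs volume_group=datavg \
      type=enhanced units=512 size_per_unit=megabytes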

Managing volume groups

Command usage Command syntax
Volume Group: Add a physical volume

clmgr modify volume_group <VG_NAME> add=<DISK>

The alias for a volume_group is vg.

Volume Group: Add a mirror pool

clmgr modify volume_group <VG_NAME> add=<DISK> mirror_pool=<POOL_NAME>

The alias for a volume_group is vg.

Volume Group: Remove a physical volume
clmgr modify volume_group <VG_NAME> remove=<DISK>

The alias for a volume_group is vg.
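
For instance, growing a hypothetical volume group datavg with a new disk hdisk5 and assigning that disk to an existing mirror pool pool1 might look like this:

clmgr modify volume_group datavg add=hdisk5 mirror_pool=pool1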

Managing resource groups

Command usage Command syntax
Move a resource group: New node
clmgr move resource_group <RG_NAME> node=<NODE2>

The alias for a resource_group is rg.

Note: All resources that are managed by the resource group are brought offline on the current node, and brought online on the specified new node.
Move a resource group: New site

clmgr move resource_group <RG_NAME> site=<SITE2>

The alias for a resource_group is rg.

Note: All resources that are managed by the resource group are brought offline on the current node, and brought online on a node at the specified new site.
Start a resource group

clmgr start resource_group <RG_NAME> [node=<NODE2>] \
      [PRIMARY=true] [SECONDARY=true]

The alias for a resource_group is rg.

Note: All resources that are managed by the resource group are brought online on the specified node. If you do not specify the node input, the resource group is brought online on the default node for the current policy.
To bring the resource group simultaneously into the ONLINE and ONLINE SECONDARY states in a multi-site cluster environment, you must specify the following additional attributes:
PRIMARY=true and SECONDARY=true
To bring the resource group into the ONLINE SECONDARY state on a node, run the following command:
clmgr start resource_group <RG_NAME> [node=<NODE2>] \
      [SECONDARY=true]
Stop a resource group
clmgr stop resource_group <RG_NAME> [node=<NODE2>] \
      [PRIMARY=true] [SECONDARY=true]
Note: Resources that are managed by the resource group are brought offline on the current node.
To bring the resource group on both sites from the ONLINE and ONLINE SECONDARY states to the OFFLINE state simultaneously, you must specify the following additional attributes:
PRIMARY=true and SECONDARY=true
To bring the resource group from the ONLINE SECONDARY state to the OFFLINE state on a node, run the following command:
clmgr stop resource_group <RG_NAME> [node=<NODE2>] \
      [SECONDARY=true]
Suspend application monitoring
clmgr manage application_controller suspend <APP>
Note: This command suspends application monitoring for the specified application. You can specify ALL instead of an application controller to suspend all application monitoring.
Resume application monitoring
clmgr manage application_controller resume <APP>
Note: This command resumes application monitoring for the specified application. You can specify ALL instead of an application controller to resume all application monitoring.
Move service IP
clmgr move service_ip <SERVICE_LABEL> interface=<NEW_INTERFACE>
Note: The <NEW_INTERFACE> variable refers to a logical interface. For example, en3.
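
As a short illustration of the preceding rows, the following hypothetical sequence moves a resource group db_rg to node nodeA2 and then resumes monitoring for its application controller db_app:

clmgr move resource_group db_rg node=nodeA2
clmgr manage application_controller resume db_app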

Cluster services

Command usage Command syntax
Start cluster services: Entire cluster
clmgr start cluster
Note: All resources that are managed by the cluster are brought online unless the Manage Resource Group option in SMIT is set to Manually.
Start cluster services: Site
clmgr start site <SITE_NAME>
Note: All resources that are managed by the nodes within the site are brought online, unless the current policy setting forbids it or the System Management (C-SPOC) > PowerHA SystemMirror Services > Start Cluster Services > Manage Resource Group field in the SMIT interface is set to Manually.
Start cluster services: Node
clmgr start node <NODE_NAME>
Note: All resources that are managed by the node are brought online, unless the current policy setting forbids it or the System Management (C-SPOC) > PowerHA SystemMirror Services > Start Cluster Services > Manage Resource Group field in the SMIT interface is set to Manually.
Stop cluster services: Entire cluster
clmgr stop cluster
Note: All resources that are managed by the cluster are brought offline. If you want to suspend cluster services without bringing applications and other resources offline, you must set the manage option to unmanage.
Stop cluster services: Site
clmgr stop site <SITE_NAME>
Note: All resources that are managed by the nodes within the site are brought offline, unless the manage option is set to unmanage or move.
Stop cluster services: Node
clmgr stop node <NODE_NAME>
Note: All resources that are managed by the node are brought offline, unless the manage option is set to unmanage or move.
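
For example, in the same hypothetical cluster, the following commands start cluster services on every node and later stop services on a single node while leaving its applications running; the manage=unmanage attribute in the second command is an assumption based on the notes above about the manage option:

clmgr start cluster
clmgr stop node nodeA2 manage=unmanage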