mmaddcallback command

Registers a user-defined command that GPFS™ will execute when certain events occur.

Synopsis

mmaddcallback CallbackIdentifier --command CommandPathname
              --event Event[,Event...] [--priority Value]
              [--async | --sync [--timeout Seconds] [--onerror Action]]
              [-N {Node[,Node...] | NodeFile | NodeClass}]
              [--parms ParameterString ...]

or

mmaddcallback {-S Filename | --spec-file Filename} 

Availability

Available on all IBM Spectrum Scale™ editions.

Description

Use the mmaddcallback command to register a user-defined command that GPFS executes when certain events occur.

The callback mechanism is intended to provide notifications when node and cluster events occur. Invoking complex or long-running commands, or commands that involve GPFS files, may cause unexpected and undesired results, including loss of file system availability. This is particularly true when the --sync option is specified.

Note: For documentation about local events (callbacks) and variables for GPFS Native RAID, see the separate publication IBM Spectrum Scale RAID: Administration.

Parameters

CallbackIdentifier
Specifies a user-defined unique name that identifies the callback. It can be up to 255 characters long. It cannot contain special characters (for example, a colon, semicolon, blank, tab, or comma) and it cannot start with the letters gpfs or mm (which are reserved for GPFS internally defined callbacks).
--command CommandPathname
Specifies the full path name of the executable to run when the event occurs. On Windows, CommandPathname must be a Korn shell script because it will be invoked in the Cygwin ksh environment.

The executable called by the callback facility must be installed on all nodes on which the callback can be triggered. Place the executable in a local file system (not in a GPFS file system) so that it is accessible even when the GPFS file system is unavailable.
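
For illustration, a minimal callback executable might look like the following sketch. The path, script name, and contents are hypothetical; the positional arguments the script receives are whatever was specified with --parms when the callback was registered.

    #!/bin/ksh
    # /var/mmfs/callbacks/logEvent.sh (illustrative path on a local file system)
    # $1 and $2 correspond to the --parms string, for example "%eventName %fsName".
    logger -t gpfs-callback "event=$1 fs=$2 node=$(hostname)"
    exit 0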

--event Event[,Event...]
Specifies a list of events that trigger the callback. The value defines when the callback is invoked. There are two kinds of events: global events and local events. A global event triggers a callback on all nodes in the cluster, such as a nodeLeave event, which informs all nodes in the cluster that a node has failed. A local event triggers a callback only on the node on which the event occurred, such as mounting a file system on one of the nodes.

Table 1 lists the supported global events and their parameters.

Table 2 lists the supported local events and their parameters.

Local events for GPFS Native RAID are documented in IBM Spectrum Scale RAID: Administration.

--priority Value
Specifies a floating point number that controls the order in which callbacks for a given event are run. Callbacks with a smaller numerical value are run before callbacks with a larger numerical value. Callbacks that do not have an assigned priority are run last. If two callbacks have the same priority, the order in which they are run is undetermined.
--async | --sync [--timeout Seconds] [--onerror Action]
Specifies whether GPFS waits for the user program to complete and, if so, for how long. The default is --async (GPFS invokes the command asynchronously and does not wait for it to complete). --onerror Action specifies one of the following actions that GPFS takes if the callback command returns a nonzero error code:
continue
GPFS ignores the result from executing the user-provided command. This is the default.
quorumLoss
The node executing the user-provided command will voluntarily resign as, or refrain from taking over as, cluster manager. This action is valid only in conjunction with the tiebreakerCheck event.
shutdown
GPFS will be shut down on the node executing the user-provided command.
-N {Node[,Node...] | NodeFile | NodeClass}
Defines the set of nodes on which the callback is invoked. For global events, the callback is invoked only on the specified set of nodes. For local events, the callback is invoked only if the node on which the event occurred is one of the nodes specified by the -N option. The default is -N all.

For general information on how to specify node names, see Specifying nodes as input to GPFS commands in the IBM Spectrum Scale: Administration and Programming Reference.

This command does not support a NodeClass of mount.

--parms ParameterString ...
Specifies parameters to be passed to the executable specified with the --command parameter. The --parms parameter can be specified multiple times.

When the callback is invoked, the combined parameter string is tokenized on white-space boundaries. Constructs of the form %name and %name.qualifier are assumed to be GPFS variables and are replaced with their appropriate values at the time of the event. If a variable does not have a value in the context of a particular event, the string UNDEFINED is returned instead. An illustrative registration is shown after the lists of variables and qualifiers below.

GPFS recognizes the following variables:
%blockLimit
Specifies the current hard quota limit in KB.
%blockQuota
Specifies the current soft quota limit in KB.
%blockUsage
Specifies the current usage in KB for quota-related events.
%ccrObjectName
Specifies the name of the modified object.
%ccrObjectValue
Specifies the value of the modified object.
%ccrObjectVersion
Specifies the version of the modified object.
%clusterManager[.qualifier]
Specifies the current cluster manager node.
%clusterName
Specifies the name of the cluster where this callback was triggered.
%ckDataLen
Specifies the length of data involved in a checksum mismatch.
%ckErrorCountClient
Specifies the cumulative number of errors for the client side in a checksum mismatch.
%ckErrorCountNSD
Specifies the cumulative number of errors for the NSD side in a checksum mismatch.
%ckErrorCountServer
Specifies the cumulative number of errors for the server side in a checksum mismatch.
%ckNSD
Specifies the NSD involved.
%ckOtherNode
Specifies the IP address of the other node in an NSD checksum event.
%ckReason
Specifies the reason string indicating why a checksum mismatch callback was invoked.
%ckReportingInterval
Specifies the error-reporting interval in effect at the time of a checksum mismatch.
%ckRole
Specifies the role (client or server) of a GPFS node.
%ckStartSector
Specifies the starting sector of a checksum mismatch.
%daName
Specifies the name of the declustered array involved.
%daRemainingRedundancy
Specifies the remaining fault tolerance in a declustered array.
%diskName
Specifies a disk or a comma-separated list of disk names for which this callback is triggered.
%downNodes[.qualifier]
Specifies a comma-separated list of nodes that are currently down. Only nodes local to the given cluster are listed. Nodes which are in a remote cluster but have temporarily joined the cluster are not included.
%eventName
Specifies the name of the event that triggered this callback.
%eventNode[.qualifier]
Specifies a node or comma-separated list of nodes on which this callback is triggered. Note that the list may include nodes which are not local to the given cluster, but have temporarily joined the cluster to mount a file system provided by the local cluster. Those remote nodes could leave the cluster if there is a node failure or if the file systems are unmounted.
%filesLimit
Specifies the current hard quota limit for the number of files.
%filesQuota
Specifies the current soft quota limit for the number of files.
%filesUsage
Specifies the current number of files for quota-related events.
%filesetName
Specifies the name of a fileset for which the callback is being executed.
%filesetSize
Specifies the size of the fileset.
%fsErr
Specifies the file system structure error code.
%fsName
Specifies the file system name for file system events.
%hardLimit
Specifies the hard limit for the block.
%homeServer
Specifies the name of the home server.
%inodeLimit
Specifies the hard limit of the inode.
%inodeQuota
Specifies the soft limit of the inode.
%inodeUsage
Specifies the total number of files in the fileset.
%myNode[.qualifier]
Specifies the node where the callback script is invoked.
%nodeName
Specifies the node name to which the request is sent.
%nodeNames
Specifies a space-separated list of node names to which the request is sent.
%pcacheEvent
Specifies the pcache-related events.
%pdFru
Specifies the FRU (field replaceable unit) number of the pdisk.
%pdLocation
Specifies the physical location code of the pdisk.
%pdName
Specifies the name of the pdisk involved.
%pdPath
Specifies the block device path of the pdisk.
%pdPriority
Specifies the replacement priority of the pdisk.
%pdState
Specifies the state of the pdisk involved.
%pdWwn
Specifies the worldwide name of the pdisk.
%prepopAlreadyCachedFiles
Specifies the number of files that are already cached. These files are not read into the cache because the data is the same between cache and home.
%prepopCompletedReads
Specifies the number of reads executed during a prefetch operation.
%prepopData
Specifies the total data read from the home as part of a prefetch operation.
%prepopFailedReads
Specifies the number of files for which prefetch failed. Messages are logged to indicate the failure, but the names of the files that could not be read are not reported.
%quorumNodes[.qualifier]
Specifies a comma-separated list of quorum nodes.
%quotaEventType
Specifies either the blockQuotaExceeded event or the inodeQuotaExceeded event. These events are related to the soft quota limit being exceeded.
%quotaID
Specifies the numerical ID of the quota owner (UID, GID, or fileset ID).
%quotaOwnerName
Specifies the name of the quota owner (user name, group name, or fileset name).
%quotaType
Specifies the type of quota for quota-related events. Possible values are USR, GRP, or FILESET.
%reason
Specifies the reason for triggering the event. For the preUnmount and unmount events, the possible values are normal and forced. For the preShutdown and shutdown events, the possible values are normal and abnormal. For all other events, the value is UNDEFINED.
%requestType
Specifies the type of request to send to the target nodes.
%rgCount
Specifies the number of recovery groups involved.
%rgErr
Specifies a code from a recovery group, where 0 indicates no error.
%rgName
Specifies the name of the recovery group involved.
%rgReason
Specifies the reason string indicating why a recovery group callback was invoked.
%senseDataFormatted
Specifies the sense data for the specific file system structure error as formatted string output.
%senseDataHex
Specifies the sense data for the specific file system structure error as big-endian hex output.
%snapshotID
Specifies the identifier of the new snapshot.
%snapshotName
Specifies the name of the new snapshot.
%softLimit
Specifies the soft limit of the block.
%storagePool
Specifies the storage pool name for space-related events.
%upNodes[.qualifier]
Specifies a comma-separated list of nodes that are currently up. Only nodes local to the given cluster are listed. Nodes which are in a remote cluster but have temporarily joined the cluster are not included.
%userName
Specifies the user name.
%waiterLength
Specifies the length of the waiter in seconds.

Variables recognized by GPFS Native RAID are documented in IBM Spectrum Scale RAID: Administration.

Variables that represent node identifiers accept an optional qualifier that can be used to specify how the nodes are to be identified. When specifying one of these optional qualifiers, separate it from the variable with a period, as shown here:
variable.qualifier
The value for qualifier can be one of the following:
ip
Specifies that GPFS should use the nodes' IP addresses.
name
Specifies that GPFS should use fully-qualified node names. This is the default.
shortName
Specifies that GPFS should strip the domain part of the node names.
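
For example, the following registration (the script path and callback name are hypothetical) passes the short names and IP addresses of the nodes that triggered the event to the callback:

    mmaddcallback logNodeLeave --command /var/mmfs/callbacks/logNodeLeave.sh --event nodeLeave --parms "%eventNode.shortName %eventNode.ip"

At invocation time the script receives the expanded values as ordinary positional arguments: $1 holds the short name (or comma-separated list of short names) of the node or nodes that left the cluster, and $2 holds the corresponding IP address or addresses. A variable that has no value for the event is replaced with the string UNDEFINED.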

Events and supported parameters

Table 1. Global events and supported parameters
Each entry lists the global event, a description of when it is triggered, and the parameters it supports.
afmFilesetExpired
Triggered when the contents of a fileset expire either as a result of the fileset being disconnected for the expiration timeout value or when the fileset is marked as expired using the AFM administration commands.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmFilesetUnexpired
Triggered when the contents of a fileset become unexpired either as a result of the reconnection to home or when the fileset is marked as unexpired using the AFM administration commands.
%fsName %filesetName %pcacheEvent %homeServer %reason
nodeJoin
Triggered when one or more nodes join the cluster.
%eventNode
nodeLeave
Triggered when one or more nodes leave the cluster.
%eventNode
quorumReached
Triggered when a quorum has been established in the GPFS cluster. This event is triggered only on the cluster manager, not on all the nodes in the cluster.
%quorumNodes
quorumLoss
Triggered when quorum has been lost in the GPFS cluster.
N/A
quorumNodeJoin
Triggered when one or more quorum nodes join the cluster.
%eventNode
quorumNodeLeave
Triggered when one or more quorum nodes leave the cluster.
%eventNode
clusterManagerTakeOver
Triggered when a new cluster manager node is elected. This happens when a cluster first starts up or when the current cluster manager fails or resigns and a new node takes over as cluster manager.
N/A
Table 2. Local events and supported parameters
Each entry lists the local event, a description of when it is triggered, and the parameters it supports.
afmCmdRequeued
Triggered during replication when messages are queued up again to be retried later. Queued messages are retried every 15 minutes.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmFilesetUnmounted
Triggered when the fileset is moved to the Unmounted state because the NFS server is not reachable or the remote cluster mount is not available for the GPFS Native protocol.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmHomeConnected
Triggered when a gateway node connects to the afmTarget of the fileset that it is serving. This event is local on gateway nodes.

%fsName %filesetName %pcacheEvent %homeServer %reason
afmHomeDisconnected
Triggered when a gateway node gets disconnected from the afmTarget of the fileset that it is serving. This event is local on gateway nodes.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmManualResyncComplete
Triggered when a manual resync is completed.
%fsName %filesetName %reason
afmPrepopEnd
Triggered when all the files specified by a prefetch operation have been cached successfully. This event is local on gateway nodes.
%fsName %filesetName %prepopCompletedReads %prepopFailedReads %prepopAlreadyCachedFiles %prepopData
afmQueueDropped
Triggered when replication encounters an issue that cannot be corrected. After the queue is dropped, the next recovery action attempts to fix the error and continue replication.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmRecoveryFail
Triggered when recovery fails. The recovery action is retried after 300 seconds. If recovery keeps failing, the fileset is moved to a resync state if the fileset mode allows it.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmRecoveryStart
Triggered when AFM recovery starts. This event is local on gateway nodes.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmRecoveryEnd
Triggered when AFM recovery ends. This event is local on gateway nodes.
%fsName %filesetName %pcacheEvent %homeServer %reason
afmRPOMiss
Triggered when the Recovery Point Objective (RPO) is missed on DR primary filesets; the RPO manager keeps retrying the snapshots. This event occurs when there is a lot of data to replicate before the RPO snapshot can be taken, or when an error, such as a deadlock, causes recovery to keep failing.
%fsName %filesetName %pcacheEvent %homeServer %reason
ccrFileChange
Triggered when a CCR fput operation takes place.
%ccrObjectName %ccrObjectVersion
ccrVarChange
Triggered when a CCR vput operation takes place.
%ccrObjectName %ccrObjectValue %ccrObjectVersion
daRebuildFailed
The daRebuildFailed callback is generated when the spare space in a declustered array has been exhausted, and vdisk tracks involving damaged pdisks can no longer be rebuilt. The occurrence of this event indicates that fault tolerance in the declustered array has become degraded and that disk maintenance should be performed immediately. The daRemainingRedundancy parameter indicates how much fault tolerance remains in the declustered array.
%myNode %rgName %daName %daRemainingRedundancy
deadlockDetected
Triggered when a node detects a potential deadlock. If the exit code of the registered callback for this event is 1, debug data will not be collected.

See the /usr/lpp/mmfs/samples/deadlockdetected.sample file for an example of using the deadlockDetected event.

%eventName %myNode %waiterLength
deadlockOverload
Triggered when an overload event occurs. The event is local to the node detecting the overload condition.
%eventName %nodeName
diskFailure
Triggered on the file system manager when the status of a disk in a file system changes to down.
%eventName %diskName %fsName
filesetLimitExceeded
Triggered when the file system manager detects that a fileset quota has been exceeded. This is a variation of softQuotaExceeded that applies only to fileset quotas. It exists only for compatibility (and may be deleted in a future version); therefore, using softQuotaExceeded is recommended instead.
%filesetName %fsName %filesetSize %softLimit %hardLimit %inodeUsage %inodeQuota %inodeLimit %quotaEventType
fsstruct
Triggered when the file system manager detects a file system structure (FS Struct) error.
For more information about FS Struct errors, see the following topics in the IBM Spectrum Scale: Problem Determination Guide:
  • MMFS_FSSTRUCT
  • Reliability, Availability, and Serviceability (RAS) events
  • Information to collect before contacting the IBM® Support Center
%fsName %fsErr %senseDataFormatted %senseDataHex
healthCollapse
Triggered when the node health declines below the healthCollapseThreshold long enough for the health check thread to notice.
N/A
lowDiskSpace
Triggered when the file system manager detects that disk space usage has reached the high occupancy threshold that is specified in the current policy rule. The event is generated every two minutes until the condition no longer exists. For more information, see the topic Using thresholds with external pools in the IBM Spectrum Scale: Administration and Programming Reference.
%storagePool %fsName
noDiskSpace
Triggered when the file system encounters a disk or storage pool that has run out of space, or an inode space that has run out of inodes. An inode space can be an entire file system or an independent fileset. Use the noSpaceEventInterval configuration attribute of the mmchconfig command to control the time interval between two noDiskSpace events. The default value is 120 seconds.

When a storage pool runs out of disk space, %reason is "diskspace", %storagePool is the name of the pool that ran out of disk space, and %filesetName is "UNDEFINED".

When a fileset runs out of inode space, %reason is "inodespace", %filesetName is the name of the independent fileset that owns the affected inode space, and %storagePool is "UNDEFINED".

%storagePool %fsName %reason %filesetName
nsdCksumMismatch
The nsdCksumMismatch callback is generated whenever transmission of vdisk data by the NSD network layer fails to verify the data checksum. This can indicate problems in the network between the GPFS client node and a recovery group server. The first error between a given client and server generates the callback; subsequent callbacks are generated for each ckReportingInterval occurrence.
%myNode %ckRole %ckOtherNode %ckNSD %ckReason %ckStartSector %ckDataLen %ckErrorCountClient %ckErrorCountServer %ckErrorCountNSD %ckReportingInterval
pdFailed
The pdFailed callback is generated whenever a pdisk in a recovery group is marked as dead, missing, failed, or readonly.
%myNode %rgName %daName %pdName %pdLocation %pdFru %pdWwn %pdState
pdPathDown
The pdPathDown callback is generated whenever one of the block device paths to a pdisk disappears or becomes inoperative. The occurrence of this event can indicate connectivity problems with the JBOD array in which the pdisk resides.
%myNode %rgName %daName %pdName %pdLocation %pdFru %pdWwn %pdPath
pdReplacePdisk
The pdReplacePdisk callback is generated whenever a pdisk is marked for replacement according to the replace threshold setting of the declustered array in which it resides.
%myNode %rgName %daName %pdName %pdLocation %pdFru %pdWwn %pdState %pdPriority
pdRecovered
The pdRecovered callback is generated whenever a missing pdisk is rediscovered.
%myNode %rgName %daName %pdName %pdLocation %pdFru %pdWwn
preMount, preUnmount, mount, unmount
These events are triggered when a file system is about to be mounted or unmounted or has been mounted or unmounted successfully. These events are generated for explicit mount and unmount commands, a remount after GPFS recovery, and a forced unmount when GPFS panics and shuts down.
%fsName %reason
preRGRelinquish
The preRGRelinquish callback is invoked on a recovery group server prior to relinquishing service of recovery groups. The rgName parameter may be passed into the callback as the keyword value _ALL_, indicating that the recovery group server is about to relinquish service for all recovery groups it is serving; the rgCount parameter will be equal to the number of recovery groups being relinquished. Additionally, the callback will be invoked with the rgName of each individual recovery group and an rgCount of 1 whenever the server relinquishes serving recovery group rgName.
%myNode %rgName %rgErr %rgCount %rgReason
preRGTakeover
The preRGTakeover callback is invoked on a recovery group server prior to attempting to open and serve recovery groups. The rgName parameter may be passed into the callback as the keyword value _ALL_, indicating that the recovery group server is about to open multiple recovery groups; this is typically at server startup, and the parameter rgCount will be equal to the number of recovery groups being processed. Additionally, the callback will be invoked with the rgName of each individual recovery group and an rgCount of 1 whenever the server checks to determine whether it should open and serve recovery group rgName.
%myNode %rgName %rgErr %rgCount %rgReason
preShutdown
Triggered when GPFS detects a failure and is about to shut down.
%reason
preStartup
Triggered after the GPFS daemon completes its internal initialization and joins the cluster, but before the node runs recovery for any file systems that were already mounted, and before the node starts accepting user initiated sessions.
N/A
postRGRelinquish
The postRGRelinquish callback is invoked on a recovery group server after it has relinquished serving recovery groups. If multiple recovery groups have been relinquished, the callback will be invoked with rgName keyword _ALL_ and an rgCount equal to the total number of involved recovery groups. The callback will also be triggered for each individual recovery group.
%myNode %rgName %rgErr %rgCount %rgReason
postRGTakeover
The postRGTakeover callback is invoked on a recovery group server after it has checked, attempted, or begun to serve a recovery group. If multiple recovery groups have been taken over, the callback will be invoked with rgName keyword _ALL_ and an rgCount equal to the total number of involved recovery groups. The callback will also be triggered for each individual recovery group.
%myNode %rgName %rgErr %rgCount %rgReason
rgOpenFailed
The rgOpenFailed callback will be invoked on a recovery group server when it fails to open a recovery group that it is attempting to serve. This may be due to loss of connectivity to some or all of the disks in the recovery group; the rgReason string will indicate why the recovery group could not be opened.
%myNode %rgName %rgErr %rgReason
rgPanic
The rgPanic callback will be invoked on a recovery group server when it is no longer able to continue serving a recovery group. This may be due to loss of connectivity to some or all of the disks in the recovery group; the rgReason string will indicate why the recovery group can no longer be served.
%myNode %rgName %rgErr %rgReason
sendRequestToNodes
Triggered when a node sends a request for collecting expel-related debug data.

For this event, the %requestType is requestExpelData.

%eventName %requestType %nodeNames
shutdown
Triggered when GPFS completes the shutdown.
%reason
snapshotCreated
Triggered after a snapshot is created, and run before the file system is resumed. This event helps correlate the timing of DMAPI events with the creation of a snapshot. GPFS must wait for snapshotCreated to exit before it resumes the file system, so the ordering of DMAPI events and snapshot creation is known.

The %filesetName is the name of the fileset whose snapshot was created. For file system level snapshots that affect all filesets, %filesetName is set to global.

%snapshotID %snapshotName %fsName %filesetName
softQuotaExceeded
Triggered when the file system manager detects that a soft quota limit (for either files or blocks) has been exceeded. This event is triggered only on the file system manager node; because any manager node can become the file system manager, the event must be handled on all manager nodes.
 
startup
Triggered after a successful GPFS startup, before the node is ready for user-initiated sessions. After this event is triggered, GPFS finishes starting up, including mounting all file systems that are defined to be mounted at startup.
N/A
tiebreakerCheck
Triggered when the cluster manager detects a lease timeout on a quorum node before GPFS runs the algorithm that decides if the node will remain in the cluster. This event is generated only in configurations that use tiebreaker disks.
N/A
traceConfigChanged
Triggered when GPFS tracing configuration is changed.
N/A
usageUnderSoftQuota
Triggered when the file system manager detects that quota usage has dropped below soft limits and grace time is reset.
%fsName %filesetName %quotaID %quotaType %quotaOwnerName %blockUsage %blockQuota %blockLimit %filesUsage %filesQuota %filesLimit
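
As an illustration of how these event parameters can be consumed, the following sketch registers a handler for the lowDiskSpace event. The callback name and script path are hypothetical and the handler only logs the condition; in practice such a callback often starts a policy run (for example, with mmapplypolicy), subject to the cautions in the Description section about long-running commands.

    mmaddcallback poolWatch --command /var/mmfs/callbacks/poolWatch.sh --event lowDiskSpace --parms "%storagePool %fsName"

    #!/bin/ksh
    # /var/mmfs/callbacks/poolWatch.sh (illustrative)
    # $1 = storage pool name, $2 = file system name (from the --parms string above)
    logger -t gpfs-callback "lowDiskSpace: pool $1 in file system $2 has reached its occupancy threshold"
    exit 0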

Options

-S Filename | --spec-file Filename
Specifies a file with multiple callback definitions, one per line. The first token on each line must be the callback identifier.
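
For example, a specification file might contain one callback definition per line, each starting with its identifier and followed by the same options that are accepted on the command line (the file name, identifiers, and paths below are illustrative):

    logStartup --command /var/mmfs/callbacks/logStartup.sh --event startup
    NFSexport --command /usr/local/bin/NFSexport --event mount,preUnmount --parms "%eventName %fsName"

The file would then be registered with a single invocation:

    mmaddcallback -S /tmp/callbacks.spec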

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the mmaddcallback command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a file system in IBM Spectrum Scale: Administration and Programming Reference.

Examples

  1. To register command /tmp/myScript to run after GPFS startup, issue this command:
    mmaddcallback test1 --command=/tmp/myScript --event startup
    The system displays information similar to:
    mmaddcallback: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
  2. To register a callback on the NFS servers to export or to unexport a particular file system after it has been mounted or before it has been unmounted, issue this command:
    mmaddcallback NFSexport --command /usr/local/bin/NFSexport --event mount,preUnmount -N nfsserver1,nfsserver2 --parms "%eventName %fsName"
    The system displays information similar to:
    mmaddcallback: 6027-1371 Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
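  3. As a further, purely illustrative example (the script path is hypothetical), a synchronous callback for the tiebreakerCheck event could be registered so that the node resigns as, or refrains from taking over as, cluster manager when the script returns a nonzero exit code. The --onerror quorumLoss action is valid only with this event:
    mmaddcallback tbCheck --command /var/mmfs/callbacks/tiebreakerCheck.sh --event tiebreakerCheck --sync --timeout 10 --onerror quorumLoss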

Location

/usr/lpp/mmfs/bin