Initializing the Data Management application
All DMAPI functions must be called from nodes within the cluster where the file system is created; they cannot be invoked from a remote cluster.

GPFS provides two mechanisms to help initialize the DM application:
- The shell script gpfsready, invoked by the GPFS daemon during initialization.
- A timeout interval, allowing mount operations to wait for a disposition to be set for the mount event.
During GPFS initialization, the daemon invokes the gpfsready shell script, located in the /var/mmfs/etc directory. This occurs as the file systems are starting to be mounted. The script can be modified to start or restart the DM application. Upon return from the script, a session should have been created and a disposition set for the mount event; otherwise, mount operations may fail for lack of a disposition.
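For illustration, here is a minimal sketch of what a modified gpfsready might do on a session node. The DM application name (dmapp), its installation path, and the readiness file it creates once its session and mount disposition exist are all hypothetical; the actual signaling mechanism is defined by the DM application, not by GPFS.

    #!/bin/ksh
    # Hypothetical gpfsready fragment for a session node: start the DM
    # application if needed, then wait until it signals readiness.
    DMAPP=/usr/local/bin/dmapp        # hypothetical DM application
    READY=/var/run/dmapp.ready        # assumed to be created by dmapp once
                                      # its session and mount disposition exist
    if ! pgrep -x dmapp >/dev/null 2>&1; then
        $DMAPP &                      # start (or restart) the DM application
    fi

    # Do not return to the GPFS daemon until the mount disposition is set;
    # returning earlier could allow mounts to fail for lack of a disposition.
    while [ ! -f $READY ]; do
        sleep 1
    done
    exit 0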
In a multiple-node environment such as GPFS, usually only a small subset of the nodes are session nodes, with a DM application running locally. On nodes that are not session nodes, the gpfsready script can be modified to synchronize the local GPFS daemon with a remote DM application, which prevents mounts from failing on any node.
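On a node that is not a session node there is no local DM application to start, so the script only needs to wait until the remote DM application is ready. One possible approach, again using hypothetical names and an illustrative ssh-based check, is to poll the session node:

    #!/bin/ksh
    # Hypothetical gpfsready fragment for a non-session node: block until
    # the DM application on the session node has set the mount disposition.
    SESSION_NODE=sess01               # hypothetical session node host name
    READY=/var/run/dmapp.ready        # hypothetical readiness file

    until ssh $SESSION_NODE test -f $READY 2>/dev/null; do
        sleep 5
    done
    exit 0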
A sample script, gpfsready.sample, is installed in the /usr/lpp/mmfs/samples directory.
If no mount disposition has ever been set in the cluster, the first external mount of a DMAPI-enabled file system on each node activates a timeout interval on that node. Any mount operation on that node that starts during the timeout interval waits for the mount disposition until the timeout expires. The interval is configurable with the dmapiMountTimeout attribute of the mmchconfig command, and can even be made infinite. A message is displayed at the beginning of the wait. If there is still no disposition for the mount event when the timeout expires, the mount operation fails with the EIO error code. See GPFS configuration attributes for DMAPI for more information about dmapiMountTimeout.
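For example, to give the DM application more time to establish the mount disposition, the timeout could be raised with a command such as the following (the value shown is arbitrary):

    # Allow mounts to wait up to 120 seconds for the mount disposition
    mmchconfig dmapiMountTimeout=120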