Shared Disk (SD)

In the SD cluster configuration, the filesystem disks are directly attached to the cluster nodes. As a result, user data flows over the SAN, while Spectrum Scale control information flows over a TCP/IP network (LAN).

With regard to the SUT, this means that the FileNet® File Storage Area was shared among the ECM nodes and that the SAN disks for the File Storage Area were directly attached to each of the four ECM nodes. Every ECM node issued its disk I/O directly to the SAN and, at the same time, exchanged control information over the LAN to maintain data consistency.

Figure 1 shows the four ECM cluster nodes sharing the FileNet Advanced File Storage Area using the Spectrum Scale cluster configuration SD.

Figure 1. ECM cluster with Spectrum Scale SD cluster configuration

The following example lists the Spectrum Scale cluster configuration SD for the SUT with its node designations for the ECM nodes (cluster members).
# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         ECM_4_node
  GPFS cluster id:           714383681xxxxxxxxx
  GPFS UID domain:           ECM_4_node
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address     Admin node name  Designation
---------------------------------------------------------------------
   1   ECMnod1           10.xxx.xx.xxx  ECMnod1          quorum-manager
   2   ECMnod2           10.xxx.xx.xxx  ECMnod2          quorum
   3   ECMnod3           10.xxx.xx.xxx  ECMnod3          quorum
   4   ECMnod4           10.xxx.xx.xxx  ECMnod4
The Spectrum Scale cluster had four member nodes (ECMnod[1-4]):
  • ECMnod1 had the node designation quorum-manager, which means it acted as the filesystem manager for the cluster and was also in the node pool from which quorum was derived.
  • ECMnod2 and ECMnod3 complemented the node quorum and were both in the pool of quorum nodes.
  • ECMnod4 had the default node designation nonquorum-client, which is not explicitly listed in the Designation column of the mmlscluster output.
Therefore, ECMnod[1-3] had server roles and ECMnod4 had a client role.
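
Node designations like the ones shown above could, for example, be assigned with the mmchnode command. The following commands are an illustrative sketch only, reusing the SUT node names; the exact invocation for a given cluster may differ:
# mmchnode --quorum --manager -N ECMnod1
# mmchnode --quorum -N ECMnod2,ECMnod3
# mmchnode --nonquorum --client -N ECMnod4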

The mmlsnsd command displays network shared disk (NSD) information for a Spectrum Scale cluster. In the case of SD, (directly attached) is typically reported in the NSD servers column.

Here is an example of SAN disks for the FileNet File Storage Area that are directly attached to a node:
# mmlsnsd

File system     Disk name      NSD servers 
---------------------------------------------------------------------------
FSData          nsd14cb        (directly attached) 
FSData          nsd15cb        (directly attached) 
FSData          nsd14cc        (directly attached) 
FSData          nsd15cc        (directly attached) 
FSData          nsd14cd        (directly attached) 
FSData          nsd15cd        (directly attached)

In the above example, the SAN disks for the filesystem FSData (which was used as the FileNet File Storage Area) were directly attached to each ECM node. The mmlsnsd command would give the same output on all four ECM nodes.
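
To verify the direct attachment, the mapping of NSD names to local device names can be displayed for the nodes. This is an illustrative command only; the reported device names depend on the actual disk configuration of each node:
# mmlsnsd -M
For an SD configuration, each NSD of the filesystem FSData is expected to resolve to a local block device on every ECM node.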

The next example shows the filesystem FSData mounted on all four ECM nodes.
# mmlsmount FSData -L                                      
File system FSData is mounted on 4 nodes:
  10.xxx.xx.xxx   ECMnod1                  
  10.xxx.xx.xxx   ECMnod2                  
  10.xxx.xx.xxx   ECMnod3                  
  10.xxx.xx.xxx   ECMnod4

In the above example, Spectrum Scale allows the filesystem FSData to be mounted on all four ECM nodes so that the FileNet File Storage Area can be shared.
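
If the filesystem were not yet mounted on all nodes, it could be mounted cluster-wide with the mmmount command and verified afterwards. This is an illustrative sketch that assumes the filesystem name FSData from the SUT:
# mmmount FSData -a
# mmlsmount FSData -L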

The following example lists the Spectrum Scale pagepool size for the ECM nodes:
# mmlsconfig pagepool
pagepool 2G  

Every node in a Spectrum Scale cluster has its own pagepool. The size of the pagepool can vary according to the intended role or task of the node. The default pagepool size for Spectrum Scale version 4.2 is 1 GiB.

For the SUT with the SD cluster configuration, the pagepool size was set to 2 GiB on all four ECM nodes. This pagepool size was chosen taking into account:
  • The available memory on the ECM nodes.
  • The ECM workload used.
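
A pagepool size such as the 2 GiB used for the SUT could, for example, be set with the mmchconfig command and then checked with mmlsconfig. This is an illustrative sketch only; here -N limits the change to the listed nodes and -i applies it immediately and persistently:
# mmchconfig pagepool=2G -i -N ECMnod1,ECMnod2,ECMnod3,ECMnod4
# mmlsconfig pagepool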