Planning for node roles

When you configure an IBM Storage Scale Erasure Code Edition system, it is important to account for both the workload and the roles of the various nodes.

Each cluster requires manager nodes and quorum nodes. Each recovery group requires a recovery group master. The IBM Storage Scale installation toolkit helps to configure the quorum and the manager node roles.
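
For example, with the installation toolkit you can designate these roles when you add nodes to the cluster definition. The following is a minimal sketch: the host names are placeholders, and the -so (scale-out storage), -q (quorum), and -m (manager) options should be verified against the toolkit documentation for your release.

  # Designate quorum and manager roles while adding scale-out storage
  # nodes to the cluster definition (host names are examples)
  ./spectrumscale node add ece-node1.example.com -so -q -m
  ./spectrumscale node add ece-node2.example.com -so -q -m
  ./spectrumscale node add ece-node3.example.com -so -q -m
  # Remaining storage nodes carry no additional roles
  ./spectrumscale node add ece-node4.example.com -so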

In addition, some IBM Storage Scale features require additional node types (a toolkit example follows this list):
  • CES services require CES nodes, which can also be part of an IBM Storage Scale Erasure Code Edition recovery group.
  • AFM requires gateway nodes, which cannot be part of a recovery group.
  • Transparent cloud tiering (TCT) requires TCT nodes, which cannot be part of a recovery group.
  • The management GUI requires GUI nodes, which cannot be part of a recovery group.
  • TSM backup requires backup nodes, which cannot be part of a recovery group.
  • Other (non-IBM Storage Scale Erasure Code Edition) storage types cannot be part of a recovery group.
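
As a sketch of how these additional node types might be declared with the installation toolkit (host names are placeholders; -p designates a protocol/CES node and -g a GUI node; verify the options against your toolkit release):

  # A CES protocol node; it can also carry recovery group duties
  ./spectrumscale node add ces-node1.example.com -p
  # A GUI node; it must not be part of a recovery group
  ./spectrumscale node add gui-node1.example.com -g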

Before you install IBM Storage Scale Erasure Code Edition, a basic network test must be passed. A freely available open-source tool, provided without warranty or official support from IBM®, helps you run the test. Any network that cannot run or pass the test must be considered unsuitable for an IBM Storage Scale Erasure Code Edition installation. For more information, see Network requirements and precheck.
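
The outline below shows the general shape of such a precheck run. The repository location and invocation are assumptions based on the publicly available IBM network readiness tool; follow Network requirements and precheck for the supported procedure.

  # Illustrative only: the repository name and script invocation are
  # assumptions; see "Network requirements and precheck" for details.
  git clone https://github.com/IBM/SpectrumScale_NETWORK_READINESS.git
  cd SpectrumScale_NETWORK_READINESS
  # Define the candidate nodes in the tool's hosts file, then run the test;
  # a network that cannot pass is not suited for an ECE installation.
  ./koet.py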

When you plan a system, it is best to first determine the minimum hardware that IBM Storage Scale RAID requires to deliver the needed performance and capacity. Then, add hardware as needed to meet your functional requirements for the various node roles and applications.

As nodes take on more roles, the performance of applications that run on a node might be affected by the operations of those roles. Tasks that are file system intensive or CPU intensive might run slower on a node that acts as both a recovery group master and a file system manager than on other nodes in the cluster. There are two strategies to consider when you distribute node roles and workload across a cluster:
  • Concentrate several roles on a small subset of nodes. For example, you might choose three nodes to act as file system managers, recovery group masters, and quorum nodes. Other cluster applications can then avoid these three nodes entirely when determining where to run, because these nodes might be more heavily used.
  • Distribute the roles of file system manager and recovery group master across different nodes in the cluster. In this way, you can use any node in the cluster to run applications, with the expectation that each might be slightly impacted (see the sketch after this list).
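
For the second strategy, standard IBM Storage Scale commands can help you verify and rebalance where the file system manager role lands. A small sketch (file system and node names are examples):

  # Show which node currently holds each file system manager role
  mmlsmgr
  # Move the file system manager for file system fs1 from a busy
  # application node to a less loaded node
  mmchmgr fs1 mgr-node2.example.com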

The installation toolkit assists with node role selection and configuration during system installation.
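
For example, before you start the installation you can review the roles that are recorded in the cluster definition (output format varies by release):

  # List all nodes in the cluster definition together with their
  # assigned roles (quorum, manager, protocol, GUI, and so on)
  ./spectrumscale node list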