Limitations of the installation toolkit
Before using the installation toolkit to install IBM Storage Scale and deploy protocols, review the following limitations and workarounds, if any.
Function | Description | Workaround, if any |
---|---|---|
CES groups | The installation toolkit does not support the configuration of CES groups. This causes protocol deployments to fail in multi-network environments. Having CES groups might cause issues with upgrades as well because the installation toolkit is not aware of CES groups during the upgrade. | |
Clusters larger than 64 nodes | The installation toolkit does not restrict the number of nodes in a cluster in which the toolkit can be used. However, it is designed as a single-node server, so as the number of nodes increases, the bandwidth to the installer node decreases and the latency goes up. This implementation might cause issues in clusters with more than 64 nodes. Note: If you want to use the installation toolkit in a cluster larger than 64 nodes, contact scale@us.ibm.com. | |
Clusters without passwordless SSH between all nodes | If clusters are set up in an AdminCentral=True configuration, which is a widely used configuration, the installation toolkit and protocols might not function correctly. | Set up passwordless SSH between all nodes in the cluster and to the nodes themselves by using the FQDN, IP address, and host name. |
Compression | The installation toolkit does not configure file system compression. | After installation, configure the compression function manually. For more information, see File compression. |
Concurrent upgrade | The installation toolkit does not support concurrent upgrade. You must plan for an outage depending on your setup. This brief outage prevents mixed code versions from running at the same time, and it is also needed in a manual upgrade. Although the upgrade is non-concurrent, data is typically still accessible during the upgrade window. Access might be lost and need to be reestablished multiple times because of how the upgrade procedure is run among nodes. | |
Configuration change | The installation toolkit does not support changing the configuration of existing entities such as file systems and NSDs. For example, changing the block size or the default replication factor for data blocks of an existing file system is not supported. However, the installation toolkit can be used to add nodes, NSDs, or file systems to an existing cluster. The installation toolkit does not support authentication reconfiguration, and it does not use the authentication section of the cluster definition file during upgrade. | |
Customer designation of sensor or collector nodes | The installation toolkit does not support customer designation of sensor or collector nodes. The installation toolkit automatically sets up sensors and collectors without allowing the user to choose which nodes have these functions. | |
Disabling or uninstalling protocols and uninstalling GPFS | The installation toolkit does not support disabling or uninstalling protocols and uninstalling GPFS on an existing GPFS cluster. | Use the manual procedures. |
ESS and IBM Storage Scale Erasure Code Edition mixed environment | The installation toolkit does not support an environment in which ESS and IBM Storage Scale Erasure Code Edition coexist. | Use the manual procedure to install, deploy, or upgrade IBM Storage Scale Erasure Code Edition in a cluster that contains ESS. |
Encryption | The installation toolkit does not support encrypted file systems. Therefore, installation and deployment by using the installation toolkit do not work if the CES shared root file system or any other file system that the installation toolkit works with is encrypted. | |
EPEL repositories | The installation toolkit does not support the configuration of protocols when EPEL repositories are configured. | Remove or disable EPEL repositories before you use the installation toolkit for installation, deployment, or upgrade. |
File system DMAPI flag set to Yes (-z) installation and deployment | The file system DMAPI flag is used in IBM Storage Scale for IBM Storage Protect for Space Management and policy management, attaching with an IBM Storage Protect server, and with IBM Spectrum Archive. The installation toolkit does not provide an option to add a DMAPI flag to a new or existing file system. If you need to add a DMAPI flag to a file system, use the installation toolkit to create the file system and later set the DMAPI flag manually. | For information on manual procedures for installation and deployment, see Manually installing the IBM Storage Scale software packages on Linux nodes. |
File system DMAPI flag set to Yes (-z) upgrade | An upgrade by using the installation toolkit is affected by the presence of the DMAPI flag in several ways. | |
FPO configuration for disks | The installation toolkit does not support the extra stanza file flags that are required for an FPO setup. | Use the manual procedure to set the FPO-specific stanza file flags. |
Federal Information Processing Standard (FIPS) enabled | The installation toolkit does not support environments in which FIPS is enabled. | |
GPG (GNU Privacy Guard) signed package support | The installation toolkit support for packages that are signed with a GPG key has limitations. | |
Host-based SSH authentication | The installation toolkit does not support host-based SSH authentication. It supports only key-based SSH authentication. | Either set up key-based SSH authentication temporarily for use with the toolkit, or follow the manual steps in Manually installing the IBM Storage Scale software packages on Linux nodes. |
Kafka packages | To make the IBM Storage Scale cluster ready for file audit logging and watch folder functions, the installation toolkit automatically installs the gpfs.librdkafka package for supported operating systems and hardware architectures. This might lead to errors if the prerequisites for gpfs.librdkafka are not installed. | Ensure that the prerequisite packages for gpfs.librdkafka are installed. For more information, see Requirements, limitations, and support for file audit logging. |
Local host | | |
Multiple CES networks | The installation toolkit does not support deployments with multiple CES networks. Attempting deployment in this configuration has a high probability of failure, because when the CES address pool spans multiple subnets, the command that adds CES addresses might assign an IP address to a CES node that cannot serve that subnet, which causes the deployment to fail. | |
Multiple clusters | The installation toolkit does not support multiple clusters being defined in the cluster definition. | |
Multi-region object deployment | For a multi-region object deployment, the installation toolkit only sets up the region number not the replication. For information about setting up multi-region object deployment, see Enabling multi-region object deployment initially. | |
NFS or SMB exports configuration | The installation toolkit does not configure any exports on SMB or NFS. | Use the manual procedure. For information about configuring Cluster Export Services and creating exports, see Configuring Cluster Export Services and Managing protocol data exports. |
Node function addition during upgrade | The installation toolkit does not support designating node functionality during upgrade. | To add a function to the cluster or a node, designate this new function using the installation toolkit and proceed with an installation or a deployment. Perform this action either before or after an upgrade. |
Non-English languages in client programs such as PuTTY | The installation toolkit does not support setting the language in client programs such as PuTTY to any language other than English. | Set the language of your client program to English. |
NSD SAN attachment during initial installation | The installation toolkit cannot be used for NSD SAN attachment during initial installation because when adding NSDs by using the installation toolkit, a primary and an optional comma-separated list of secondary NSD servers must be designated. | |
Object protocol with IPv6 configured | The installation toolkit does not support object protocol in a cluster in which IPv6 is configured. | |
Online upgrade of a 2-node cluster without tie-breaker disks | Online upgrade of a 2-node cluster that does not have tie-breaker disks configured is not supported. To do an online upgrade of a 2-node cluster by using the installation toolkit, tie-breaker disks must be configured in the cluster. A 1-node cluster can be upgraded only offline. | Do an offline upgrade. For more information, see Performing offline upgrade or excluding nodes from upgrade by using installation toolkit. |
Package managers other than yum, zypper, or apt-get | The installation toolkit requires the yum (RHEL), zypper (SLES), or apt-get (Ubuntu) package manager to function. | |
PPC and x86_64 or PPC and s390x or x86_64 and s390x mix | The installation toolkit does not support mixed CPU architecture configurations. | Use the installation toolkit on a subset of nodes that are supported and then manually install or deploy on the remaining nodes. For upgrading a mixed CPU architecture cluster, you can use the installation toolkit in two hops. For more information, see Upgrading mixed CPU architecture cluster. |
Quorum or manager configuration after cluster installation | The installation toolkit allows a user to add -m (to specify a manager node) and -q (to specify a quorum node) flags to various nodes as they are added. If the proposed configuration does not match the existing configuration, the installation toolkit does nothing to change it. | Manually change node roles by using the mmchnode command. |
Remote mounted file systems | | |
Repository proxy | The installation toolkit does not support proxy setups when working with repositories. yum repolist must not have any failed repositories and it must be clean. | Ensure that there are no stale or failed repositories and that only the base OS repositories are enabled during any installation toolkit activities such as installation, deployment, or upgrade. |
RPMs that have dependencies upon GPFS RPMs and GPFS settings | In an environment where some RPMs have dependencies on base GPFS RPMs or GPFS settings, the installation toolkit cannot be used for installation or upgrade. | |
Running mmchconfig release=LATEST to complete an upgrade | The installation toolkit does not run mmchconfig release=LATEST after an upgrade. This is to give users time to verify an upgrade success and decide if the code level upgrade should be finalized. | Use mmchconfig release=LATEST after an upgrade using the installation toolkit to finalize the upgrade across the cluster. |
Separate admin and daemon network | The installation toolkit does not support separate admin and daemon network. | |
Sudo user | The installation toolkit does not function correctly unless run as root. Running as sudo or as another user does not work. | |
Support for AIX®, Debian, PowerKVM, Windows | The installation toolkit does not support AIX, Debian, PowerKVM, Windows operating systems. If these operating systems are installed on any cluster nodes, do not add these nodes to the installation toolkit. | Use the installation toolkit on a subset of nodes that are supported and then manually perform installation, deployment, or upgrade on the remaining nodes. For information about manual upgrade, see Upgrading. |
Tie-Breaker NSD configuration | The installation toolkit does not configure tie-breaker disks. | Manually set the tie-breaker configuration as required using mmchconfig after completing installation using the toolkit. |
Transparent cloud tiering | The installation toolkit does not install, configure, or upgrade Transparent cloud tiering. | Use the manual procedures. For more information, see Installing transparent cloud tiering (discontinued). |
Unique NSD device configuration | The installation toolkit relies on the user having already configured and run the nsddevices sample script that is provided within a GPFS installation. The mmcrnsd and mmchnsd commands require that the nsddevices script is run beforehand. Therefore, the installation toolkit fails if the user has not done this. | |
Upgrade while skipping over versions | The installation toolkit does not support skipping over major or minor versions of IBM Storage Scale releases when doing an online upgrade. For example, if an IBM Storage Scale cluster is at version 4.1.1.x, you cannot use the installation toolkit for an online upgrade directly to version 5.1.x. | Do an offline upgrade from version 4.1.1.x to 5.1.x. For more information, see Performing offline upgrade or excluding nodes from upgrade by using installation toolkit. |
Using the latest Python version on a RHEL operating system | If you are using a RHEL operating system, downloading, installing, or setting Python >=3.12 as the default path causes issues. The latest version of Python may cause failures in installation toolkit operations because RHEL operating systems do not usually work with the latest version of Python. For example, when the latest Python version was 3.12, RHEL 8.10 used Python 3.6.x and RHEL 9.4 used Python 3.9.x. | For the proper functioning of the installation toolkit, use the recommended version of Python, as described in Software requirements and Preparing to use the installation toolkit. |
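The workaround for the "Clusters without passwordless SSH between all nodes" limitation can be sketched as follows. This is a minimal sketch only: the node names, IP addresses, and key type are illustrative examples, not values from this document, and the commands must be run with appropriate credentials on each node.

```shell
# Sketch: set up passwordless SSH for the installation toolkit.
# Generate a key pair if one does not already exist (example key type).
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Copy the public key to every node, reachable by FQDN, IP address,
# and short host name, so that all three forms work without a password.
# The node list below is hypothetical.
for node in node1.example.com node1 192.0.2.11 \
            node2.example.com node2 192.0.2.12; do
    ssh-copy-id -i ~/.ssh/id_ed25519.pub root@"$node"
done

# Verify: each form must log in without prompting for a password.
ssh -o BatchMode=yes root@node1.example.com true && echo "node1 FQDN OK"
```

Remember that the toolkit also requires passwordless SSH from each node to itself, so include the local node's own FQDN, IP address, and host name in the loop.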
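For the "Quorum or manager configuration after cluster installation" and "Tie-Breaker NSD configuration" rows, the manual adjustments can be sketched as follows; the node and NSD names are examples, and the commands assume an administrative shell on a cluster node.

```shell
# Sketch: change node roles manually, because the toolkit does not
# alter an existing quorum/manager configuration. Node names are examples.
mmchnode --quorum --manager -N node1,node2,node3   # grant quorum + manager roles
mmchnode --nonquorum -N node4                      # remove the quorum role

# Sketch: configure tie-breaker disks after installation completes.
# The NSD names are examples; use NSDs that exist in your cluster.
mmchconfig tiebreakerDisks="nsd1;nsd2;nsd3"
mmlsconfig tiebreakerDisks                         # confirm the setting
```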
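The "Running mmchconfig release=LATEST to complete an upgrade" row leaves finalization to the administrator. A minimal sketch of that step, assuming the upgrade has already been verified and using "fs1" as an example file system name:

```shell
# Sketch: finalize an upgrade after verifying it succeeded.
mmlsconfig minReleaseLevel        # check the current cluster release level
mmchconfig release=LATEST         # commit the new code level cluster-wide

# Optionally upgrade the on-disk file system format as well;
# "fs1" is an example file system name.
mmchfs fs1 -V full
```

Note that these commands are one-way: after the release level and file system format are raised, nodes running older code levels can no longer join the cluster or mount the file system.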