Flashes (Alerts)
Abstract
The IBM Storage Scale cloudkit command-line interface (CLI) now supports “Fleet” as a technology preview feature in Storage Scale 5.2.0.0 for evaluation, testing, and non-production purposes.
Content
Description:
Rapid expansion of compute infrastructure is critical for businesses today. As business needs continue to evolve, the ability to quickly scale infrastructure up or down can drastically reduce operational costs in the cloud while providing reliable application performance.
IBM Storage Scale on GCP via cloudkit now provides fast expansion of compute/client nodes that mount the remote file system at boot time and access it through POSIX/native GPFS. With this feature, businesses can rapidly scale their compute infrastructure and seamlessly access their data, all while enjoying the benefits of the cloud.
Requirements:
Refer to https://www.ibm.com/docs/en/storage-scale/5.2.0?topic=planning-storage-scale-public-clouds for the requirements for planning and getting started with cloudkit on GCP.
Getting started:
1.) Create an IBM Storage Scale “Storage-only” cluster on GCP using the following command:
./cloudkit create cluster --deployment-mode "Storage-only"
2.) Create a single-node (singleton) IBM Storage Scale “Compute-only” cluster on GCP using the following command:
./cloudkit create cluster --deployment-mode "Compute-only"
3.) Create a remote mount relationship between the storage and compute clusters using the following command:
./cloudkit grant filesystem
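To verify that the remote mount relationship is in place, you can run the standard GPFS remote cluster commands on the compute cluster. This is a minimal sketch; the command path assumes the default GPFS installation directory.
# Run on a compute cluster node; shows the remote (storage) cluster definition
/usr/lpp/mmfs/bin/mmremotecluster show all
# Shows the remote file systems that the compute cluster is set up to mount
/usr/lpp/mmfs/bin/mmremotefs show all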
4.) Create a fleet of compute nodes using the following command:
Use this command to specify the number of fleet nodes; the remote file system is mounted on each of these nodes during the initial boot phase using the native GPFS protocol. The command creates a GCP managed instance group named “<compute-cluster-name>-elastic-ins-mgr”.
./cloudkit create fleet
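To inspect the managed instance group that the fleet command creates, you can use the standard gcloud CLI. This is a sketch; the group name follows the pattern noted above, and the zone placeholder must match where the compute cluster was deployed.
# List the fleet instances that belong to the managed instance group
gcloud compute instance-groups managed list-instances <compute-cluster-name>-elastic-ins-mgr --zone <zone>
# Show the current target size and instance template of the group
gcloud compute instance-groups managed describe <compute-cluster-name>-elastic-ins-mgr --zone <zone>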
5.) If you need to adjust the scaling of your fleet, edit the desired number of nodes created by the fleet command. Any new nodes spun up this way automatically mount the GPFS file system during their boot process.
./cloudkit edit fleet
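After scaling up, you can confirm that a new fleet node has mounted the remote file system. This is a minimal sketch to run on the fleet node itself; the command path assumes the default GPFS installation directory.
# Lists the nodes on which GPFS file systems are mounted
/usr/lpp/mmfs/bin/mmlsmount all -L
# Or check the local mount table for file systems of type gpfs
df -hT | grep gpfs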
6.) Delete the fleet nodes using the following command:
./cloudkit delete fleet
7.) Delete the singleton compute cluster and the storage cluster using the following commands:
./cloudkit revoke filesystem
./cloudkit delete cluster
# The delete cluster command needs to be executed twice: once with the compute cluster name and again with the storage cluster name
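After teardown, you can confirm from the gcloud CLI that no cluster instances remain. This is a sketch; the name filters assume the instance names contain the cluster names, so adjust them to your own naming.
# List any remaining instances whose names contain the cluster name (expect no output)
gcloud compute instances list --filter="name~<compute-cluster-name>"
gcloud compute instances list --filter="name~<storage-cluster-name>"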
Limitations:
1.) Nodes spun up using fleet should be treated as short-term (cloud bursting) nodes and are not recommended for long-term usage.
2.) Monitoring of fleet nodes via pmsensors is not available.
3.) For a large number of compute nodes, plan the subnet CIDR properly; fleet nodes may fail to come up if there are not enough IP addresses available (see the sizing sketch after this list).
4.) For compute node profiles with smaller network bandwidth, use `mmchconfig sendTimeout` to increase the network timeout (an example follows this list).
5.) The fleet command uses a GCS bucket to store its Scale cluster configuration data. This bucket should be created in, or reused from, the same region as the cluster (an existing cloudkit repository can also be used for this storage); see the bucket creation sketch after this list.
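For limitation 3, a rough way to estimate how many fleet nodes a subnet can hold is to compute the usable addresses in its CIDR block. This is a sketch; GCP reserves four addresses in each primary subnet range, and the prefix length 22 is only an example.
# Usable IP addresses in a /22 subnet: 2^(32-22) minus the 4 addresses GCP reserves
prefix=22
echo $(( (1 << (32 - prefix)) - 4 ))
# Prints 1020; also subtract addresses already used by the storage and compute cluster nodes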
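For limitation 4, the timeout can be raised with mmchconfig on the affected nodes. This is a sketch; the value 60 and the node list are placeholders, so choose values appropriate to your network and fleet.
# Run from a cluster node with administrative authority; -i applies the change immediately and permanently
/usr/lpp/mmfs/bin/mmchconfig sendTimeout=60 -i -N <fleet-node-list>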
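For limitation 5, a bucket in the same region as the cluster can be created ahead of time with the standard GCS tooling. This is a sketch; the bucket name and region are placeholders.
# Create a GCS bucket in the same region as the Scale cluster
gsutil mb -l us-central1 gs://<your-cloudkit-config-bucket>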
Note:
We welcome your feedback. If you have any comments, suggestions or questions regarding any of the information provided here, email scale@us.ibm.com.
[{"Type":"MASTER","Line of Business":{"code":"LOB69","label":"Storage TPS"},"Business Unit":{"code":"BU048","label":"IBM Software"},"Product":{"code":"STXKQY","label":"IBM Storage Scale"},"ARM Category":[{"code":"a8m3p0000006xeCAAQ","label":"Cloud for IBM Spectrum Scale"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"5.2.0"}]
Document Information
Modified date:
26 April 2024
UID
ibm17148055