Manually installing IBM Storage Scale management GUI

The management GUI provides an easy way for users to configure, manage, and monitor the IBM Storage Scale system.

You can install the management GUI either by using a package manager or by installing the packages individually, as described in the following sections.

Prerequisites

The prerequisites that are applicable for installing the IBM Storage Scale system through CLI are applicable for GUI installation as well. For more information, see Installation prerequisites.

The IBM Storage Scale GUI package is also part of the installation package. You need to extract the installation package to start the installation. The performance tool packages enable the performance monitoring tool that is integrated into the GUI. The following packages are important for the performance monitoring tools in the GUI:
  • The performance tool collector package. This package is installed only on the collector nodes. By default, every GUI node is also used as a collector node to receive performance details and display them in the GUI.
  • The performance tool sensor package. Install this package on the sensor nodes if it is not already installed.
  • iptables is required for installation on Linux® operating systems. However, it might not be a prerequisite if the administrator configures the firewall for the specific GUI node. For more information, see Firewall recommendations for IBM Storage Scale GUI.
    Note: For GUI installations on RHEL 9, you must install nftables. You can run the dnf install nftables command to install it.
    However, to make the GUI functional on a system where iptables is already active, you must first disable iptables and then enable nftables.
    • To disable iptables, issue the following command:
      # systemctl disable --now iptables
    • To enable nftables, issue the following command:
      # systemctl enable --now nftables
Note: The GUI must be a homogeneous stack. That is, all packages must be of the same release. For example, do not mix the 5.1.2 GUI rpm with a 5.1.3 base rpm. However, GUI PTFs and efixes can usually be applied without installing the corresponding PTF or efix of the base package. A GUI PTF or efix is helpful if you want to fix a GUI issue without changing anything on the base layer.
The following table lists the IBM Storage Scale GUI and performance tool packages that are essential for different platforms.
Table 1. GUI packages essential for each platform

GUI packages for each platform:
  • Red Hat Enterprise Linux (RHEL) 7.x, 8.x, and 9.x:
    gpfs.gui-5.2.0-x.noarch.rpm
    gpfs.java-5.2.0-x.x86_64.rpm
    gpfs.java-5.2.0-x.ppc64le.rpm
    gpfs.java-5.2.0-x.s390x.rpm
  • SUSE Linux Enterprise Server (SLES) 15:
    gpfs.gui-5.2.0-x.noarch.rpm
    gpfs.java-5.2.0-x.x86_64.rpm
    gpfs.java-5.2.0-x.ppc64le.rpm
    gpfs.java-5.2.0-x.s390x.rpm
  • Ubuntu 20:
    gpfs.gui_5.2.0-x_all.deb
    gpfs.java_5.2.0-x_amd64.deb
    gpfs.java_5.2.0-x_ppc64el.deb

Performance monitoring tool packages for each platform:
  • RHEL 9.x x86:
    gpfs.gss.pmcollector-5.2.0-x.el8.x86_64.rpm
  • RHEL 8.x x86:
    gpfs.gss.pmcollector-5.2.0-x.el8.x86_64.rpm
    gpfs.gss.pmsensors-5.2.0-x.el8.x86_64.rpm
  • RHEL 7.x x86:
    gpfs.gss.pmcollector-5.2.0-x.el7.x86_64.rpm
    gpfs.gss.pmsensors-5.2.0-x.el7.x86_64.rpm
  • RHEL 7 s390x:
    gpfs.gss.pmsensors-5.2.0-x.el7.s390x.rpm
    gpfs.gss.pmcollector-5.2.0-x.el7.s390x.rpm
  • RHEL 8.x ppc64 LE:
    gpfs.gss.pmcollector-5.2.0-x.el8.ppc64le.rpm
    gpfs.gss.pmsensors-5.2.0-x.el8.ppc64le.rpm
  • RHEL 7.x ppc64 LE:
    gpfs.gss.pmcollector-5.2.0-x.el7.ppc64le.rpm
    gpfs.gss.pmsensors-5.2.0-x.el7.ppc64le.rpm
  • SLES 15 x86:
    gpfs.gss.pmcollector-5.2.0-x.SLES15.x86_64.rpm
    gpfs.gss.pmsensors-5.2.0-x.SLES15.x86_64.rpm
  • SLES 15 s390x:
    gpfs.gss.pmsensors-5.2.0-x.SLES15.s390x.rpm
    gpfs.gss.pmcollector-5.2.0-x.SLES15.s390x.rpm
  • Ubuntu 20.04 LTS (sensor and collector packages):
    gpfs.gss.pmsensors_5.2.0-x.U20.04_amd64.deb
    gpfs.gss.pmcollector_5.2.0-x.U20.04_amd64.deb

Ensure that the performance tool collector runs on the same node as the GUI.

Yum repository setup

You can use a yum repository to manually install the GUI rpm files. This is the preferred installation method because yum checks the dependencies and automatically installs missing platform dependencies, such as the postgres module, which is required but not included in the package.
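For example, the following is a minimal sketch of a local repository definition. The file name scale-gui.repo and the baseurl path are assumptions; point baseurl at the directory where you extracted the IBM Storage Scale packages, and create the repository metadata in that directory (for example, with the createrepo command) if it does not exist:

# /etc/yum.repos.d/scale-gui.repo (hypothetical file name)
[scale-gui]
name=IBM Storage Scale GUI packages
# Assumed extraction directory; adjust to your environment
baseurl=file:///usr/lpp/mmfs/5.2.0.0/gpfs_rpms
enabled=1
gpgcheck=0

With the repository in place, yum can resolve missing dependencies, such as the postgres module, automatically during the GUI installation.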

Installation steps

You can install the management GUI either by using the package manager (yum or zypper commands) or by installing the packages individually.

Installing management GUI by using package manager (yum or zypper commands)

It is recommended to use this method because the package manager checks the dependencies and automatically installs missing platform dependencies. In the following commands, replace elx with the OS level of the package (for example, el7 or el8) and <arch> with the architecture of the node. Issue the following commands to install the management GUI:

Red Hat® Enterprise Linux

yum install gpfs.gss.pmsensors-5.2.0-x.elx.<arch>.rpm 
yum install gpfs.gss.pmcollector-5.2.0-x.elx.<arch>.rpm
yum install gpfs.java-5.2.0-x.<arch>.rpm
yum install gpfs.gui-5.2.0-x.noarch.rpm

SLES

zypper install gpfs.gss.pmsensors-5.2.0-x.SLES15.<arch>.rpm
zypper install gpfs.gss.pmcollector-5.2.0-x.SLES15.<arch>.rpm
zypper install gpfs.java-5.2.0-x.<arch>.rpm
zypper install gpfs.gui-5.2.0-x.noarch.rpm

Installing management GUI by using RPM

Issue the following commands for both RHEL and SLES platforms. Replace elx with the OS level of the package (for example, el7, el8, or SLES15) and <arch> with the architecture of the node:

rpm -ivh gpfs.java-5.2.0-x.<arch>.rpm
rpm -ivh gpfs.gss.pmsensors-5.2.0-x.elx.<arch>.rpm  
rpm -ivh gpfs.gss.pmcollector-5.2.0-x.elx.<arch>.rpm
rpm -ivh gpfs.gui-5.2.0-x.noarch.rpm

Installing management GUI on Ubuntu by using dpkg and apt-get

Issue the following commands for Ubuntu platforms:

dpkg -i gpfs.java_5.2.0-x_<arch>.deb
dpkg -i gpfs.gss.pmsensors_5.2.0-x.<os>_<arch>.deb
dpkg -i gpfs.gss.pmcollector_5.2.0-x.<os>_<arch>.deb
apt-get install postgresql
dpkg -i gpfs.gui_5.2.0-x_all.deb
The sensor package must be installed on any additional node that you want to monitor. All sensors must point to the collector node.
Note: In IBM Storage Scale 5.1.4, you can disable the GUI service after you install it without restricting your access to the REST API service. Set WEB_GUI_ENABLED = false in the gpfsgui.properties file to provide access to only the REST API service. The WEB_GUI_ENABLED parameter is set to true by default.
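For example, the following is a minimal sketch of this setting; restarting the GUI service afterward is an assumption, but a configuration change generally takes effect only after a restart:

# In gpfsgui.properties: serve only the REST API, not the web GUI
WEB_GUI_ENABLED = false

systemctl restart gpfsgui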

Start the GUI

Start the GUI by issuing the systemctl start gpfsgui command.

Note: After you install the system and the GUI package, you need to create the first GUI user to log in to the GUI. This user can create other GUI administrative users to perform system management and monitoring tasks. When you launch the GUI for the first time after the installation, the GUI welcome page provides an option to create the first GUI user from the command-line prompt by using the /usr/lpp/mmfs/gui/cli/mkuser <user_name> -g SecurityAdmin command.
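For example, to create a first administrative user that is named admin (the user name is illustrative):

/usr/lpp/mmfs/gui/cli/mkuser admin -g SecurityAdmin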

Enabling performance tools in management GUI

The performance tool consists of sensors that are installed on all nodes that need to be monitored. It also consists of one or more collectors that receive data from the sensors. The GUI expects that a collector runs on a GUI node. The GUI queries the collectors for performance and capacity data. The following steps use the automated approach to configure and maintain performance data collection by using the mmperfmon CLI command. Manually editing the /opt/IBM/zimon/ZIMonSensors.cfg file is not compatible with this configuration mode.
  1. Install the necessary software packages. Install the collector software package, gpfs.gss.pmcollector, on all GUI nodes. Install the sensor software package, gpfs.gss.pmsensors, on all nodes that are supposed to send performance data.
  2. Initialize the performance collection. Use the mmperfmon config generate --collectors [node list] command to create an initial performance collection setup on the selected nodes. The GUI nodes must be configured as collector nodes. Depending on the installation type, this configuration might already be completed. However, verify the existing configuration. (See the example command sequence after this list.)
  3. Enable nodes for performance collection. You can enable nodes to collect performance data by issuing the mmchnode --perfmon -N [SENSOR_NODE_LIST] command. [SENSOR_NODE_LIST] is a comma-separated list of sensor node host names or IP addresses; you can also use a node class. Depending on the type of installation, nodes might already be configured for performance collection.
  4. Review peer configuration for the collectors. The mmperfmon config update command updates multiple collectors with the necessary configuration. The collector configuration is stored in the /opt/IBM/zimon/ZIMonCollector.cfg file. This file defines the collector peer configuration and the aggregation rules. If you are using only a single collector, you can skip this step. The GUI must have access to all data from each GUI node. For more information, see Configuring the collector.
  5. Review aggregation configuration for the collectors. The collector configuration is stored in the /opt/IBM/zimon/ZIMonCollector.cfg file. The performance collection tool is configured with predefined rules on how data is aggregated when it gets older. By default, four aggregation domains are created as shown:
    • A raw domain that stores the metrics uncompressed.
    • A first aggregation domain that aggregates data to 30-second averages.
    • A second aggregation domain that stores data in 15-minute averages.
    • A third aggregation domain that stores data in 6-hour averages.

    You must not change the default aggregation configuration because the already collected historical metric data might otherwise be lost. You cannot manually edit the /opt/IBM/zimon/ZIMonCollector.cfg file in the automated configuration mode.

    In addition to the aggregation that is done by the performance collector, the GUI might request aggregated data based on the zoom level of the chart. For more information, see Configuring the collector.

  6. Configure the sensors. Several GUI pages display performance data that is collected with the help of the performance monitoring tools. If data is not collected, the GUI shows error messages such as "No Data Available" or "Objects not found" in the performance charts. Installation by using the spectrumscale installation toolkit manages the default performance monitoring installation and configuration. The GUI help that is available on the various pages shows performance metric information. The GUI context-sensitive help also lists the sensor names.

    The Services > Performance Monitoring page provides options to configure the sensors and gives hints about collection periods and about restricting sensors to specific nodes.

    You can also use the mmperfmon config show command in the CLI to verify the sensor configuration. Use the mmperfmon config update command to adjust the sensor configuration to match your needs. For more information, see Configuring the sensor.

    The local file /opt/IBM/zimon/ZIMonSensors.cfg can be different on every node, and the system might overwrite this file whenever a configuration change occurs. Therefore, do not edit this file manually when you are using the automated configuration mode. During distribution of the sensor configuration, the restrict clause is evaluated and, on the nodes that do not match the restrict clause, the period for the affected sensors is set to 0 in the local /opt/IBM/zimon/ZIMonSensors.cfg file. You can check the local file to confirm that a restrict clause worked as intended.
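The following command sequence, referenced in step 2, is a minimal sketch of the automated setup. The node names gui-node1, node1, and node2 are illustrative:

# Create the initial collector configuration on the GUI node
mmperfmon config generate --collectors gui-node1
# Enable the sensor nodes for performance data collection
mmchnode --perfmon -N node1,node2
# Verify the resulting sensor configuration
mmperfmon config show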

Configuring capacity-related sensors to run on a single node

Several capacity-related sensors, such as GPFSDiskCap, GPFSFilesetQuota, GPFSFileset, and GPFSPool, must run on only a single node because they collect data for a clustered file system.

It is possible to automatically restrict these sensors to a single node. For new installations, capacity-related sensors are automatically configured to a single node where the capacity collection occurs. An updated cluster, which was installed before ESS 5.3.7 (IBM Storage Scale 5.0.5), might not be configured to use this feature automatically and must be reconfigured. To update the configuration, you can use the mmperfmon config update SensorName.restrict=@CLUSTER_PERF_SENSOR command, where SensorName values include GPFSFilesetQuota, GPFSFileset, GPFSPool, and GPFSDiskCap.

To collect file system and disk level capacity data on a single node that is selected by the system, run the following command to update the sensor configuration.
mmperfmon config update GPFSDiskCap.restrict=@CLUSTER_PERF_SENSOR
If the selected node is in the DEGRADED state, then the CLUSTER_PERF_SENSOR is automatically reconfigured to another node that is in the HEALTHY state. The performance monitoring service is restarted on the previous and currently selected nodes. For more information, see Automatic assignment of single node sensors.
Note: If the GPFSDiskCap sensor is restarted frequently, it can negatively impact system performance. The GPFSDiskCap sensor can cause an impact on system performance similar to that of the mmdf command. Therefore, it is advisable to use a dedicated healthy node in the restrict field of a single-node sensor, rather than @CLUSTER_PERF_SENSOR, until the node stabilizes in the HEALTHY state. If you manually configure the restrict field of the capacity sensors, you must ensure that all file systems on the specified node are mounted so that file system-related data, such as capacity, is recorded.

Use the Services > Performance Monitoring page to select the appropriate data collection periods for these sensors.

For the GPFSDiskCap sensor, the recommended period is 86400 seconds, which means once per day. Because the GPFSDiskCap sensor runs the mmdf command to get the capacity data, a period of less than 10800 seconds (every 3 hours) is not recommended. To show fileset capacity information, you must enable quota for all file systems where fileset capacity must be monitored. For more information, see the -q option in the mmchfs command and the mmcheckquota command.
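For example, the following is a minimal sketch that assumes a file system that is named gpfs0 (the name is illustrative) and that the -Q yes option activates quota enforcement:

# Enable quota enforcement for the file system (assumed to take effect at the next mount)
mmchfs gpfs0 -Q yes
# Verify and, if necessary, correct the quota accounting
mmcheckquota gpfs0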

To update the sensor configuration to trigger an hourly collection of fileset capacity information, run the mmperfmon command as shown in the following example:
mmperfmon config update GPFSFilesetQuota.restrict=@CLUSTER_PERF_SENSOR GPFSFilesetQuota.period=3600

Checking GUI and performance tool status

Issue the systemctl status gpfsgui command to check the GUI status, as shown in the following example:
systemctl status gpfsgui.service
gpfsgui.service - IBM_GPFS_GUI Administration GUI
Loaded: loaded (/usr/lib/systemd/system/gpfsgui.service; disabled)
Active: active (running) since Fri 2015-04-17 09:50:03 CEST; 2h 37min ago
Process: 28141 ExecStopPost=/usr/lpp/mmfs/gui/bin/cfgmantraclient unregister (code=exited, status=0/SUCCESS)
Process: 29120 ExecStartPre=/usr/lpp/mmfs/gui/bin/check4pgsql (code=exited, status=0/SUCCESS)
Main PID: 29148 (java)
Status: "GSS/GPFS GUI started"
CGroup: /system.slice/gpfsgui.service
└─29148 /opt/ibm/wlp/java/jre/bin/java -XX:MaxPermSize=256m -Dcom.ibm.gpfs.platform=GPFS 
-Dcom.ibm.gpfs.vendor=IBM -Djava.library.path=/opt/ibm/wlp/usr/servers/gpfsgui/lib/ 
-javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar -jar /opt/ibm/wlp/bin/tools/ws-server.jar gpfsgui
--clean

Apr 17 09:50:03 server-21.localnet.com java[29148]: Available memory in the JVM: 484MB
Apr 17 09:50:03 server.localnet.com java[29148]: Max memory that the JVM will attempt to use: 512MB
Apr 17 09:50:03 server.localnet.com java[29148]: Number of processors available to JVM: 2
Apr 17 09:50:03 server.localnet.com java[29148]: Backend started.
Apr 17 09:50:03 server.localnet.com java[29148]: CLI started.
Apr 17 09:50:03 server.localnet.com java[29148]: Context initialized.
Apr 17 09:50:03 server.localnet.com systemd[1]: Started IBM_GPFS_GUI Administration GUI.
Apr 17 09:50:04 server.localnet.com java[29148]: [AUDIT ] CWWKZ0001I: Application / 
started in 6.459 seconds.
Apr 17 09:50:04 server.localnet.com java[29148]: [AUDIT ] CWWKF0012I: The server 
installed the following features: [jdbc-4.0, ssl-1.0, localConnector-1.0, appSecurity-2.0, 
jsp-2.2, servlet-3.0, jndi-1.0, usr:FsccUserRepo, distributedMap-1.0].
Apr 17 09:50:04 server-21.localnet.com java[29148]: [AUDIT ] CWWKF0011I: ==> When you see 
the service was started anything should be OK !

Issue the systemctl status pmcollector and systemctl status pmsensors commands to check the status of the performance tool.
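For example:

systemctl status pmcollector
systemctl status pmsensors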

You can also check whether the performance tool backend can receive data by using the GUI. As another option, you can use the command-line performance tool that is called zc, which is available in the /opt/IBM/zimon folder. For example:
echo "get metrics mem_active, cpu_idle, gpfs_ns_read_ops last 10 bucket_size 1" | ./zc 127.0.0.1
Result example:
1: server-21.localnet.com|Memory|mem_active
2: server-22.localnet.com|Memory|mem_active
3: server-23.localnet.com|Memory|mem_active
4: server-21.localnet.com|CPU|cpu_idle
5: server-22.localnet.com|CPU|cpu_idle
6: server-23.localnet.com|CPU|cpu_idle
7: server-21.localnet.com|GPFSNode|gpfs_ns_read_ops
8: server-22.localnet.com|GPFSNode|gpfs_ns_read_ops
9: server-23.localnet.com|GPFSNode|gpfs_ns_read_ops
Row Timestamp mem_active mem_active mem_active cpu_idle cpu_idle cpu_idle gpfs_ns_read_ops gpfs_ns_read_ops gpfs_ns_read_ops
1 2015-05-20 18:16:33 756424 686420 382672 99.000000 100.000000 95.980000 0 0 0
2 2015-05-20 18:16:34 756424 686420 382672 100.000000 100.000000 99.500000 0 0 0
3 2015-05-20 18:16:35 756424 686420 382672 100.000000 99.500000 100.000000 0 0 6
4 2015-05-20 18:16:36 756424 686420 382672 99.500000 100.000000 100.000000 0 0 0
5 2015-05-20 18:16:37 756424 686520 382672 100.000000 98.510000 100.000000 0 0 0
6 2015-05-20 18:16:38 774456 686448 384684 73.000000 100.000000 96.520000 0 0 0
7 2015-05-20 18:16:39 784092 686420 382888 86.360000 100.000000 52.760000 0 0 0
8 2015-05-20 18:16:40 786004 697712 382688 46.000000 52.760000 100.000000 0 0 0
9 2015-05-20 18:16:41 756632 686560 382688 57.580000 69.000000 100.000000 0 0 0
10 2015-05-20 18:16:42 756460 686436 382688 99.500000 100.000000 100.000000 0 0 0

Node classes used for the management GUI

The IBM Storage Scale management GUI automatically creates the following node classes during installation:

  • GUI_SERVERS: Contains all nodes with a server license and all the GUI nodes
  • GUI_MGMT_SERVERS: Contains all GUI nodes

Each node on which the GUI services are started is added to these node classes.
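For example, you can verify the membership of these node classes by using the mmlsnodeclass command:

mmlsnodeclass GUI_SERVERS,GUI_MGMT_SERVERS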

Nodes can also be removed from node classes. For more information, see Removing nodes from management GUI-related node class.

For more information about node classes, see Specifying nodes as input to GPFS commands.