Manually installing IBM Spectrum Scale management GUI

The management GUI provides an easy way for users to configure, manage, and monitor the IBM Spectrum Scale system.

You can install the management GUI manually either by using a package manager or by installing the packages individually, as described in the following sections.

Prerequisites

The prerequisites that are applicable for installing the IBM Spectrum Scale system through CLI are applicable for GUI installation as well. For more information on the prerequisites for installation, see Installation prerequisites.

The IBM Spectrum Scale GUI package is part of the installation package, and you must extract it to start the installation. The performance tool packages are also required to enable the performance monitoring tool that is integrated into the GUI. The following packages are required for the performance monitoring tool in the GUI:
  • The performance tool collector package. This package is placed only on the collector nodes. By default, every GUI node is also used as the collector node to receive performance details and display them in the GUI.
  • The performance tool sensor package. This package is applicable for the sensor nodes, if not already installed.
Note: The GUI must be a homogeneous stack. That is, all packages must be of the same release. For example, do not mix the 5.0.4 GUI rpm with a 5.0.3 base rpm. However, GUI PTFs and efixes can usually be applied without having to install the corresponding PTF or efix of the base package. This is helpful if you just want to get rid of a GUI issue without changing anything on the base layer.
The following table lists the IBM Spectrum Scale GUI and performance tool packages that are required for different platforms.
Table 1. GUI packages required for each platform
  • RHEL 7.x and 8.x: gpfs.gui-5.0.5-x.noarch.rpm, gpfs.java-5.0.5-x.x86_64.rpm, gpfs.java-5.0.5-x.ppc64.rpm, gpfs.java-5.0.5-x.ppc64le.rpm, gpfs.java-5.0.5-x.s390x.rpm
  • SUSE Linux Enterprise Server 12: gpfs.gui-5.0.5-x.noarch.rpm, gpfs.java-5.0.5-x.x86_64.rpm, gpfs.java-5.0.5-x.ppc64le.rpm, gpfs.java-5.0.5-x.s390x.rpm
  • Ubuntu 16 and 18: gpfs.gui_5.0.5-x_all.deb, gpfs.java_5.0.5-x_amd64.deb, gpfs.java_5.0.5-x_s390x.deb, gpfs.java_5.0.5-x_ppc64el.deb

Performance monitoring tool packages required for each platform:
  • RHEL 8.x x86_64: gpfs.gss.pmcollector-5.0.5-x.el8.x86_64.rpm, gpfs.gss.pmsensors-5.0.5-x.el8.x86_64.rpm
  • RHEL 7.x x86_64: gpfs.gss.pmcollector-5.0.5-x.el7.x86_64.rpm, gpfs.gss.pmsensors-5.0.5-x.el7.x86_64.rpm
  • RHEL 7.x s390x: gpfs.gss.pmcollector-5.0.5-x.el7.s390x.rpm, gpfs.gss.pmsensors-5.0.5-x.el7.s390x.rpm
  • RHEL 7.x ppc64: gpfs.gss.pmcollector-5.0.5-x.el7.ppc64.rpm, gpfs.gss.pmsensors-5.0.5-x.el7.ppc64.rpm
  • RHEL 7.x ppc64 LE: gpfs.gss.pmcollector-5.0.5-x.el7.ppc64le.rpm, gpfs.gss.pmsensors-5.0.5-x.el7.ppc64le.rpm
  • SLES12 x86_64: gpfs.gss.pmcollector-5.0.5-x.SLES12.x86_64.rpm, gpfs.gss.pmsensors-5.0.5-x.SLES12.x86_64.rpm
  • SLES12 SP1 s390x: gpfs.gss.pmcollector-5.0.5-x.SLES12.1.s390x.rpm, gpfs.gss.pmsensors-5.0.5-x.SLES12.1.s390x.rpm
  • SLES12 ppc64: gpfs.gss.pmcollector-5.0.5-x.SLES12.ppc64.rpm, gpfs.gss.pmsensors-5.0.5-x.SLES12.ppc64.rpm
  • SLES12 ppc64 LE: gpfs.gss.pmcollector-5.0.5-x.SLES12.ppc64le.rpm, gpfs.gss.pmsensors-5.0.5-x.SLES12.ppc64le.rpm
  • Ubuntu 16.04 LTS: gpfs.gss.pmsensors_5.0.5-x.U16.04_amd64.deb, gpfs.gss.pmcollector_5.0.5-x.U16.04_amd64.deb
  • Ubuntu 18.04 LTS: gpfs.gss.pmsensors_5.0.5-x.U18.04_amd64.deb, gpfs.gss.pmcollector_5.0.5-x.U18.04_amd64.deb

Ensure that the performance tool collector runs on the same node as the GUI.

Yum repository setup

You can use a yum repository to install the GUI rpm files manually. This is the preferred way to install the GUI because yum checks the dependencies and automatically installs missing platform dependencies, such as the postgres module, which is required but not included in the package.
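For example, the following sketch defines a local yum repository that points to the extracted GUI and performance tool rpms. The path /usr/lpp/mmfs/5.0.5.0/gpfs_rpms is an assumption for illustration only; adjust it to the directory where you extracted the installation package.

# Hypothetical repository definition pointing to the extracted rpms
cat <<'EOF' > /etc/yum.repos.d/spectrum-scale-gui.repo
[spectrum-scale-gui]
name=IBM Spectrum Scale GUI packages
baseurl=file:///usr/lpp/mmfs/5.0.5.0/gpfs_rpms
enabled=1
gpgcheck=0
EOF
# Create the repository metadata if it does not exist yet
createrepo /usr/lpp/mmfs/5.0.5.0/gpfs_rpms
yum clean all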

Installation steps

You can install the management GUI either by using the package manager (yum or zypper commands) or by installing the packages individually.

Installing management GUI by using package manager (yum or zypper commands)

It is recommended to use this method because the package manager checks the dependencies and automatically installs missing platform dependencies. Issue the following commands to install the management GUI:

Red Hat Enterprise Linux®

yum install gpfs.gss.pmsensors-5.0.5-x.el7.<arch>.rpm 
yum install gpfs.gss.pmcollector-5.0.5-x.el7.<arch>.rpm
yum install gpfs.java-5.0.5-x.<arch>.rpm
yum install gpfs.gui-5.0.5-x.noarch.rpm

SLES

zypper install gpfs.gss.pmsensors-5.0.5-x.SLES12.<arch>.rpm
zypper install gpfs.gss.pmcollector-5.0.5-x.SLES12.<arch>.rpm
zypper install gpfs.java-5.0.5-x.<arch>.rpm
zypper install gpfs.gui-5.0.5-x.noarch.rpm

Installing management GUI by using RPM

Issue the following commands for RHEL and SLES platforms, substituting the package names that match your platform:

rpm -ivh gpfs.java-5.0.5-x.<arch>.rpm
rpm -ivh gpfs.gss.pmsensors-5.0.5-x.el7.<arch>.rpm  
rpm -ivh gpfs.gss.pmcollector-5.0.5-x.el7.<arch>.rpm
rpm -ivh gpfs.gui-5.0.5-x.noarch.rpm

Installing management GUI on Ubuntu by using dpkg and apt-get

Issue the following commands for Ubuntu platforms:

dpkg -i gpfs.java_5.0.5-x_<arch>.deb
dpkg -i gpfs.gss.pmsensors_5.0.5-x.<os>_<arch>.deb
dpkg -i gpfs.gss.pmcollector_5.0.5-x.<os>_<arch>.deb
apt-get install postgresql
dpkg -i gpfs.gui_5.0.5-x_all.deb

The sensor package must be installed on any additional node that you want to monitor. All sensors must point to the collector node.
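For example, after installing gpfs.gss.pmsensors on a hypothetical additional node client-node1, you can enable it for performance collection so that it reports to the existing collector (the command is described in the next section):

mmchnode --perfmon -N client-node1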

Start the GUI

Start the GUI by issuing the systemctl start gpfsgui command.

Note: After installing the system and the GUI package, you need to create the first GUI user to log in to the GUI. This user can create other GUI administrative users to perform system management and monitoring tasks. When you launch the GUI for the first time after the installation, the GUI welcome page provides options to create the first GUI user from the command-line prompt by using the following command:
/usr/lpp/mmfs/gui/cli/mkuser <user_name> -g SecurityAdmin
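For example, to start the GUI and create a hypothetical first administrator named admin:

systemctl start gpfsgui
/usr/lpp/mmfs/gui/cli/mkuser admin -g SecurityAdmin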

Enabling performance tools in management GUI

The performance tool consists of sensors that are installed on all nodes that need to be monitored. It also consists of one or more collectors that receive data from the sensors. The GUI expects that a collector runs on a GUI node. The GUI queries the collectors for performance and capacity data. The following steps use the automated approach to configure and maintain performance data collection by using the mmperfmon CLI command. Manually editing the /opt/IBM/zimon/ZIMonSensors.cfg file is not compatible with this configuration mode.
  1. Install the necessary software packages. Install the collector software package, gpfs.gss.pmcollector, on all GUI nodes. Install the sensor software package, gpfs.gss.pmsensors, on all nodes that are supposed to send performance data.
  2. Initialize the performance collection. Use the mmperfmon config generate --collectors [node list] command to create an initial performance collection setup on the selected nodes. The GUI nodes must be configured as collector nodes. Depending on the installation type, this configuration might already be complete; in that case, verify the existing configuration. For an example, see the command sketch after these steps.
  3. Enable nodes for performance collection. You can enable nodes to collect performance data by issuing the mmchnode --perfmon -N [SENSOR_NODE_LIST] command. [SENSOR_NODE_LIST] is a comma-separated list of sensor nodes' host names or IP addresses and you can also use a node class. Depending on the type of installation, nodes might have already been configured for performance collection.
  4. Review the peer configuration for the collectors. The mmperfmon config update command updates multiple collectors with the necessary configuration. The collector configuration is stored in the /opt/IBM/zimon/ZIMonCollector.cfg file. This file defines the collector peer configuration and the aggregation rules. If you are using only a single collector, you can skip this step. The GUI must have access to all data from each GUI node. For more information, see Configuring the collector.
  5. Configure the aggregation configuration for the collectors. The collector configuration is stored in the /opt/IBM/zimon/ZIMonCollector.cfg file. The performance collection tool has predefined rules for how data is aggregated as it gets older. By default, four aggregation domains are created: a raw domain that stores the metrics uncompressed, a first aggregation domain that aggregates data to 30-second averages, a second aggregation domain that stores data in 15-minute averages, and a third aggregation domain that stores data in 6-hour averages.

    Do not change the default aggregation configuration, because already collected historical metric data might be lost. You cannot manually edit the /opt/IBM/zimon/ZIMonCollector.cfg file in the automated configuration mode.

    In addition to the aggregation that is done by the performance collector, the GUI might request aggregated data depending on the zoom level of the chart. For details on configuring aggregation, see Configuring the collector.

  6. Configure the sensors. Several GUI pages display performance data that is collected with the help of the performance monitoring tools. If data is not collected, the GUI shows error messages such as "No Data Available" or "Objects not found" in the performance charts. An installation that uses the spectrumscale installation toolkit handles the default performance monitoring installation and configuration. The GUI contextual help that is available on various pages shows performance metric information, and the context-sensitive help also lists the sensor names.

    The Services > Performance Monitoring page provides options to configure the sensors and gives hints for collection periods and for restricting sensors to specific nodes.

    You can also use the mmperfmon config show command in the CLI to verify the sensor configuration. Use the mmperfmon config update command to adjust the sensor configuration to match your needs. For more information on configuring sensors, see Configuring the sensor.

    The local file /opt/IBM/zimon/ZIMonSensors.cfg can be different on every node and the system may change this whenever there is a configuration change. Therefore, this file must not be edited manually when using the automated configuration mode. During distribution of the sensor configuration, the restrict clause is evaluated and the period for all sensors is set to 0 in the /opt/IBM/zimon/ZIMonSensors.cfg file on those nodes that did not match the restrict clause. You can check the local file to confirm that a restrict clause worked as intended.
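The following sketch summarizes steps 2, 3, and 6 for a hypothetical cluster with a GUI node gui-node1 and two additional sensor nodes nsd-node1 and nsd-node2; the node names are placeholders.

# Step 2: create the initial collector configuration on the GUI node
mmperfmon config generate --collectors gui-node1
# Step 3: enable performance data collection on all nodes that send data
mmchnode --perfmon -N gui-node1,nsd-node1,nsd-node2
# Step 6: review the resulting sensor configuration
mmperfmon config show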

Configuring capacity-related sensors to run on a single node

Several capacity-related sensors, such as GPFSDiskCap, GPFSFilesetQuota, GPFSFileset, and GPFSPool, should run only on a single node because they collect data for a clustered file system.

It is possible to automatically restrict these sensors to a single node. For new installations, capacity-related sensors are automatically configured to a single node where the capacity collection occurs. A cluster that was installed before ESS 5.3.7 (IBM Spectrum Scale 5.0.5) and then updated might not be configured to use this feature automatically and must be reconfigured. To update the configuration, you can use the mmperfmon config update SensorName.restrict=@CLUSTER_PERF_SENSOR command, where SensorName values include GPFSFilesetQuota, GPFSFileset, GPFSPool, and GPFSDiskCap.

To collect file system and disk level capacity data on a single node that is selected by the system, run the following command to update the sensor configuration:
mmperfmon config update GPFSDiskCap.restrict=@CLUSTER_PERF_SENSOR
If the selected node is in the DEGRADED state, then the CLUSTER_PERF_SENSOR is automatically reconfigured to another node that is in the HEALTHY state. The performance monitoring service is restarted on the previously and the currently selected nodes. For more information, see Automatic assignment of single node sensors.
Note: If the GPFSDiskCap sensor is frequently restarted, it can negatively impact the system performance. The GPFSDiskCap sensor can cause a similar impact on the system performance as the mmdf command. Therefore, it is advisable to specify a dedicated healthy node in the restrict field of a single-node sensor, rather than @CLUSTER_PERF_SENSOR, until the node stabilizes in the HEALTHY state. If you manually configure the restrict field of the capacity sensors, you must ensure that all the file systems on the specified node are mounted so that file system-related data, such as capacity, is recorded.

Use the Services > Performance Monitoring page to select the appropriate data collection periods for these sensors.

For the GPFSDiskCap sensor, the recommended period is 86400, which means once per day. Because the GPFSDiskCap sensor runs the mmdf command to get the capacity data, it is not recommended to use a period of less than 10800 (every 3 hours). To show fileset capacity information, it is necessary to enable quota for all file systems where fileset capacity must be monitored. For information on enabling quota, see the -q option in the mmchfs command and the mmcheckquota command.
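For example, assuming the automated configuration mode described earlier, the recommended daily period can be set as follows (a sketch; the attribute syntax matches the GPFSFilesetQuota example that follows):
mmperfmon config update GPFSDiskCap.period=86400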

To update the sensor configuration for triggering an hourly collection of fileset capacity information, run the mmperfmon command as shown in the following example:
mmperfmon config update GPFSFilesetQuota.restrict=@CLUSTER_PERF_SENSOR gui_node GPFSFilesetQuota.period=3600

Checking GUI and performance tool status

Issue the systemctl status gpfsgui command to check the GUI status, as shown in the following example:
systemctl status gpfsgui.service
gpfsgui.service - IBM_GPFS_GUI Administration GUI
Loaded: loaded (/usr/lib/systemd/system/gpfsgui.service; disabled)
Active: active (running) since Fri 2015-04-17 09:50:03 CEST; 2h 37min ago
Process: 28141 ExecStopPost=/usr/lpp/mmfs/gui/bin/cfgmantraclient unregister (code=exited, status=0/SUCCESS)
Process: 29120 ExecStartPre=/usr/lpp/mmfs/gui/bin/check4pgsql (code=exited, status=0/SUCCESS)
Main PID: 29148 (java)
Status: "GSS/GPFS GUI started"
CGroup: /system.slice/gpfsgui.service
└─29148 /opt/ibm/wlp/java/jre/bin/java -XX:MaxPermSize=256m -Dcom.ibm.gpfs.platform=GPFS 
-Dcom.ibm.gpfs.vendor=IBM -Djava.library.path=/opt/ibm/wlp/usr/servers/gpfsgui/lib/ 
-javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar -jar /opt/ibm/wlp/bin/tools/ws-server.jar gpfsgui
--clean

Apr 17 09:50:03 server-21.localnet.com java[29148]: Available memory in the JVM: 484MB
Apr 17 09:50:03 server.localnet.com java[29148]: Max memory that the JVM will attempt to use: 512MB
Apr 17 09:50:03 server.localnet.com java[29148]: Number of processors available to JVM: 2
Apr 17 09:50:03 server.localnet.com java[29148]: Backend started.
Apr 17 09:50:03 server.localnet.com java[29148]: CLI started.
Apr 17 09:50:03 server.localnet.com java[29148]: Context initialized.
Apr 17 09:50:03 server.localnet.com systemd[1]: Started IBM_GPFS_GUI Administration GUI.
Apr 17 09:50:04 server.localnet.com java[29148]: [AUDIT ] CWWKZ0001I: Application / 
started in 6.459 seconds.
Apr 17 09:50:04 server.localnet.com java[29148]: [AUDIT ] CWWKF0012I: The server 
installed the following features: [jdbc-4.0, ssl-1.0, localConnector-1.0, appSecurity-2.0, 
jsp-2.2, servlet-3.0, jndi-1.0, usr:FsccUserRepo, distributedMap-1.0].
Apr 17 09:50:04 server-21.localnet.com java[29148]: [AUDIT ] CWWKF0011I: ==> When you see 
the service was started anything should be OK !

Issue the systemctl status pmcollector and systemctl status pmsensors commands to check the status of the performance tool.
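For example:
systemctl status pmcollector
systemctl status pmsensors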

You can also check whether the performance tool backend can receive data by using the GUI or, alternatively, by using a command-line performance tool called zc, which is available in the /opt/IBM/zimon folder. For example:
echo "get metrics mem_active, cpu_idle, gpfs_ns_read_ops last 10 bucket_size 1" | ./zc 127.0.0.1
Result example:
1: server-21.localnet.com|Memory|mem_active
2: server-22.localnet.com|Memory|mem_active
3: server-23.localnet.com|Memory|mem_active
4: server-21.localnet.com|CPU|cpu_idle
5: server-22.localnet.com|CPU|cpu_idle
6: server-23.localnet.com|CPU|cpu_idle
7: server-21.localnet.com|GPFSNode|gpfs_ns_read_ops
8: server-22.localnet.com|GPFSNode|gpfs_ns_read_ops
9: server-23.localnet.com|GPFSNode|gpfs_ns_read_ops
Row Timestamp mem_active mem_active mem_active cpu_idle cpu_idle cpu_idle gpfs_ns_read_ops gpfs_ns_read_ops gpfs_ns_read_ops
1 2015-05-20 18:16:33 756424 686420 382672 99.000000 100.000000 95.980000 0 0 0
2 2015-05-20 18:16:34 756424 686420 382672 100.000000 100.000000 99.500000 0 0 0
3 2015-05-20 18:16:35 756424 686420 382672 100.000000 99.500000 100.000000 0 0 6
4 2015-05-20 18:16:36 756424 686420 382672 99.500000 100.000000 100.000000 0 0 0
5 2015-05-20 18:16:37 756424 686520 382672 100.000000 98.510000 100.000000 0 0 0
6 2015-05-20 18:16:38 774456 686448 384684 73.000000 100.000000 96.520000 0 0 0
7 2015-05-20 18:16:39 784092 686420 382888 86.360000 100.000000 52.760000 0 0 0
8 2015-05-20 18:16:40 786004 697712 382688 46.000000 52.760000 100.000000 0 0 0
9 2015-05-20 18:16:41 756632 686560 382688 57.580000 69.000000 100.000000 0 0 0
10 2015-05-20 18:16:42 756460 686436 382688 99.500000 100.000000 100.000000 0 0 0

Node classes used for the management GUI

The IBM Spectrum Scale management GUI automatically creates the following node classes during installation:

  • GUI_SERVERS: Contains all nodes with a server license and all the GUI nodes
  • GUI_MGMT_SERVERS: Contains all GUI nodes

Each node on which the GUI services are started is added to these node classes.
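To verify the membership of these node classes, you can, for example, list them with the mmlsnodeclass command:
mmlsnodeclass GUI_MGMT_SERVERS
mmlsnodeclass GUI_SERVERS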

For information about removing nodes from these node classes, see Removing nodes from management GUI-related node class.

For information about node classes, see Specifying nodes as input to GPFS commands.