Manually installing IBM Spectrum Scale management GUI
The management GUI provides an easy way for users to configure, manage, and monitor the IBM Spectrum Scale system.
- Installing management GUI by using the installation toolkit. For more information on installing the management GUI by using the installation toolkit, see Installing IBM Spectrum Scale management GUI by using the installation toolkit.
- Manual installation of the management GUI. The following sections provide the details of how to manually install the management GUI.
The prerequisites that are applicable for installing the IBM Spectrum Scale system through the CLI are applicable for GUI installation as well. For more information on the prerequisites for installation, see Installation prerequisites.
- The performance tool collector package. This package is placed only on the collector nodes. By default, every GUI node is also used as the collector node to receive performance details and display them in the GUI.
- The performance tool sensor package. This package is applicable for the sensor nodes, if not already installed.
| GUI platform | Package name |
| --- | --- |
| SUSE Linux Enterprise Server 12 | gpfs.gui-5.0.4-x.noarch.rpm |
| Ubuntu 16 and 18 | gpfs.gui_5.0.4-x_all.deb |
| Performance monitoring tool platform | Performance monitoring tool packages |
| --- | --- |
| RHEL 7.x x86 | gpfs.gss.pmsensors-5.0.4-x.el7.x86_64.rpm, gpfs.gss.pmcollector-5.0.4-x.el7.x86_64.rpm |
| RHEL 7 s390x | gpfs.gss.pmsensors-5.0.4-x.el7.s390x.rpm, gpfs.gss.pmcollector-5.0.4-x.el7.s390x.rpm |
| RHEL 7.x ppc64 | gpfs.gss.pmsensors-5.0.4-x.el7.ppc64.rpm, gpfs.gss.pmcollector-5.0.4-x.el7.ppc64.rpm |
| RHEL 7.x ppc64 LE | gpfs.gss.pmsensors-5.0.4-x.el7.ppc64le.rpm, gpfs.gss.pmcollector-5.0.4-x.el7.ppc64le.rpm |
| SLES12 SP1 s390x | gpfs.gss.pmsensors-5.0.4-x.SLES12.s390x.rpm, gpfs.gss.pmcollector-5.0.4-x.SLES12.s390x.rpm |
| SLES12 ppc64 LE | gpfs.gss.pmsensors-5.0.4-x.SLES12.ppc64le.rpm, gpfs.gss.pmcollector-5.0.4-x.SLES12.ppc64le.rpm |
| Ubuntu 16.04 LTS sensor and collector packages | gpfs.gss.pmsensors_5.0.4-x.<os>_<arch>.deb, gpfs.gss.pmcollector_5.0.4-x.<os>_<arch>.deb |
| Ubuntu 18.04 LTS sensor and collector packages | gpfs.gss.pmsensors_5.0.4-x.<os>_<arch>.deb, gpfs.gss.pmcollector_5.0.4-x.<os>_<arch>.deb |
Ensure that the performance tool collector runs on the same node as the GUI.
Yum repository setup
You can use a yum repository to manually install the GUI rpm files. This is the preferred way of GUI installation because yum checks the dependencies and automatically installs missing platform dependencies, such as the PostgreSQL module, which is required but not included in the package.
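As a sketch, assuming the installation packages were extracted to a local directory such as /usr/lpp/mmfs/5.0.4.0/gpfs_rpms, a local repository definition could look like the following. The repository ID, file name, and path are illustrative assumptions, not fixed product values:

```ini
# /etc/yum.repos.d/spectrum-scale-local.repo (hypothetical file name)
[spectrum-scale-local]
name=IBM Spectrum Scale local packages
baseurl=file:///usr/lpp/mmfs/5.0.4.0/gpfs_rpms
enabled=1
gpgcheck=0
```

With such a repository in place, yum install gpfs.gui resolves and installs the missing platform dependencies automatically.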
You can install the management GUI either using the package manager (yum or zypper commands) or by installing the packages individually.
Installing management GUI by using package manager (yum or zypper commands)
It is recommended to use this method because the package manager checks the dependencies and automatically installs missing platform dependencies. Issue the following commands to install the management GUI:
Red Hat Enterprise Linux

yum install gpfs.gss.pmsensors-5.0.4-x.el7.<arch>.rpm
yum install gpfs.gss.pmcollector-5.0.4-x.el7.<arch>.rpm
yum install gpfs.java-5.0.4-x.<arch>.rpm
yum install gpfs.gui-5.0.4-x.noarch.rpm

SUSE Linux Enterprise Server

zypper install gpfs.gss.pmsensors-5.0.4-x.SLES12.<arch>.rpm
zypper install gpfs.gss.pmcollector-5.0.4-x.SLES12.<arch>.rpm
zypper install gpfs.java-5.0.4-x.<arch>.rpm
zypper install gpfs.gui-5.0.4-x.noarch.rpm
Installing management GUI by using RPM
Issue the following commands for both RHEL and SLES platforms:
rpm -ivh gpfs.java-5.0.4-x.<arch>.rpm
rpm -ivh gpfs.gss.pmsensors-5.0.4-x.el7.<arch>.rpm
rpm -ivh gpfs.gss.pmcollector-5.0.4-x.el7.<arch>.rpm
rpm -ivh gpfs.gui-5.0.4-x.noarch.rpm
Installing management GUI on Ubuntu by using dpkg and apt-get
Issue the following commands for Ubuntu platforms:
dpkg -i gpfs.java_5.0.4-x_<arch>.deb
dpkg -i gpfs.gss.pmcollector_5.0.4-x.<os>_<arch>.deb
apt-get install postgresql
dpkg -i gpfs.gui_5.0.4-x_all.deb
The sensor package must be installed on any additional node that you want to monitor. All sensors must point to the collector node.
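On each sensor node, the collector that the sensors report to is recorded in the local sensor configuration file. A minimal illustrative fragment is shown below; the host name is an assumption for this example, and the port and exact syntax can differ by release. Do not edit this file by hand when the automated (mmperfmon-managed) configuration mode is in use:

```
# Illustrative fragment of /opt/IBM/zimon/ZIMonSensors.cfg
# (host name is hypothetical; managed automatically by mmperfmon)
collectors = {
        host = "gui-node.example.com"
        port = "4739"
}
```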
Start the GUI
Start the GUI by issuing the systemctl start gpfsgui command.
Create an initial GUI user so that you can log in to the GUI. This user can then create other GUI users:
/usr/lpp/mmfs/gui/cli/mkuser <user_name> -g SecurityAdmin
Enabling performance tools in management GUI
- Install the necessary software packages. Install the collector software package, gpfs.gss.pmcollector, on all GUI nodes. Install the sensor software package, gpfs.gss.pmsensors, on all nodes that must send performance data.
- Initialize the performance collection. Use the mmperfmon config generate --collectors [node list] command to create an initial performance collection setup on the selected nodes. The GUI nodes must be configured as collector nodes. Depending on the installation type, this configuration might already be in place; verify the existing configuration.
- Enable nodes for performance collection. You can enable nodes to collect performance data by issuing the mmchnode --perfmon -N [SENSOR_NODE_LIST] command, where [SENSOR_NODE_LIST] is a comma-separated list of sensor node host names or IP addresses; you can also use a node class. Depending on the type of installation, nodes might already be configured for performance collection.
- Configure aggregation for the collectors. The collector configuration is stored in the /opt/IBM/zimon/ZIMonCollector.cfg file. The performance collection tool has predefined rules on how data is aggregated as it ages. By default, four aggregation domains are created: a raw domain that stores the metrics uncompressed, a first aggregation domain that aggregates data to 30-second averages, a second aggregation domain that stores data in 15-minute averages, and a third aggregation domain that stores data in 6-hour averages.
In addition to the aggregation that is done by the performance collector, the GUI might request aggregated data depending on the zoom level of the chart. For details on configuring aggregation, see Configuring the collector.
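Expressed in seconds per bucket, the four default domains described above correspond to the following sketch; the exact stanza syntax in ZIMonCollector.cfg can differ by release, so treat this as an annotated summary rather than a literal file excerpt:

```
# Sketch of the four default aggregation domains (seconds per bucket):
# raw domain:     aggregation 0      (metrics stored uncompressed)
# first domain:   aggregation 30     (30-second averages)
# second domain:  aggregation 900    (15-minute averages)
# third domain:   aggregation 21600  (6-hour averages)
```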
- Configure the sensors. Several GUI pages display performance data that is collected with the help of the performance monitoring tools. If data is not collected, the GUI shows error messages such as "No Data Available" or "Objects not found" in the performance charts. Installation with the spectrumscale installation toolkit manages the default performance monitoring installation and configuration. The GUI contextual help that is available on various pages shows performance metric information and also lists the sensor names.
The performance monitoring configuration page in the GUI provides options to configure the sensors and gives hints for collection periods and for restricting sensors to specific nodes.
You can also use the mmperfmon config show command in the CLI to verify the sensor configuration. Use the mmperfmon config update command to adjust the sensor configuration to match your needs. For more information on configuring sensors, see Configuring the sensor.
The local file /opt/IBM/zimon/ZIMonSensors.cfg can differ on every node, and the system might change it whenever the configuration changes. Therefore, do not edit this file manually when you use the automated configuration mode. During distribution of the sensor configuration, the restrict clause is evaluated, and the period for all sensors is set to 0 in the /opt/IBM/zimon/ZIMonSensors.cfg file on nodes that do not match the restrict clause. You can check the local file to confirm that a restrict clause worked as intended.
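The check described above can be scripted. The following sketch simulates it against a sample file; the file content is a simplified, hypothetical excerpt of the distributed sensor configuration, created here only so the example is self-contained:

```shell
# Create a simplified sample of the distributed sensor configuration.
# On a node excluded by a restrict clause, the sensor appears with period 0.
cat > /tmp/ZIMonSensors.sample <<'EOF'
sensors = {
        name = "GPFSDiskCap"
        period = 0
}
EOF

# Print the period line that follows the sensor name;
# "period = 0" confirms the sensor is disabled on this node.
grep -A1 '"GPFSDiskCap"' /tmp/ZIMonSensors.sample | grep period
```

On a real node, you would run the same grep against /opt/IBM/zimon/ZIMonSensors.cfg instead of the sample file.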
Capacity-related sensors need manual steps for initialization. It is necessary to configure a single node where the capacity collection must occur. You can do this by using a restrict clause. The restrict clause expects the admin node name as shown by the mmlscluster command; alternative host names that are known to DNS are ignored in the restrict clause.
If the GPFSDiskCap sensor is disabled, the GUI does not show any capacity data. The recommended period is 86400 seconds, which means once per day. Because this sensor runs mmdf, it is not recommended to set GPFSDiskCap.period to a value less than 10800 (every 3 hours).
mmperfmon config update GPFSDiskCap.restrict=gui_node GPFSDiskCap.period=86400
To show fileset capacity information, it is necessary to enable quota for all file systems where fileset capacity must be monitored. For information on enabling quota, see the -q option in mmchfs command and mmcheckquota command.
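The comma-separated [SENSOR_NODE_LIST] that mmchnode --perfmon expects (see the steps above) can be assembled from a plain list of host names. This is a minimal sketch; the node names and the file /tmp/sensor_nodes.txt are hypothetical and created here only for illustration:

```shell
# Hypothetical input: one sensor node host name per line.
printf 'sensor-node-1\nsensor-node-2\nsensor-node-3\n' > /tmp/sensor_nodes.txt

# Join the names into the comma-separated list that mmchnode expects.
SENSOR_NODE_LIST=$(paste -sd, /tmp/sensor_nodes.txt)
echo "$SENSOR_NODE_LIST"    # prints: sensor-node-1,sensor-node-2,sensor-node-3

# The resulting GPFS call (not run here) would be:
# mmchnode --perfmon -N "$SENSOR_NODE_LIST"
```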
Configuring capacity-related sensors to run on a single node
Several capacity-related sensors should run on only a single node because they collect data for a clustered file system, for example, GPFSDiskCap, GPFSFilesetQuota, GPFSFileset, and GPFSPool.
It is possible to automatically restrict these sensors to a single node. For new installations, capacity-related sensors are automatically configured to run on a single node where the capacity collection occurs. Use the performance monitoring configuration page in the GUI to select the appropriate data collection periods for these sensors.
For the GPFSDiskCap sensor, the recommended period is 86400, which means once per day. Because the GPFSDiskCap sensor runs the mmdf command to get the capacity data, it is not recommended to set GPFSDiskCap.period to a value less than 10800 (every 3 hours). To show fileset capacity information, it is necessary to enable quota for all file systems where fileset capacity must be monitored. For information on enabling quota, see the -q option in the mmchfs command and the mmcheckquota command.
Checking GUI and performance tool status
systemctl status gpfsgui.service
gpfsgui.service - IBM_GPFS_GUI Administration GUI
   Loaded: loaded (/usr/lib/systemd/system/gpfsgui.service; disabled)
   Active: active (running) since Fri 2015-04-17 09:50:03 CEST; 2h 37min ago
  Process: 28141 ExecStopPost=/usr/lpp/mmfs/gui/bin/cfgmantraclient unregister (code=exited, status=0/SUCCESS)
  Process: 29120 ExecStartPre=/usr/lpp/mmfs/gui/bin/check4pgsql (code=exited, status=0/SUCCESS)
 Main PID: 29148 (java)
   Status: "GSS/GPFS GUI started"
   CGroup: /system.slice/gpfsgui.service
           └─29148 /opt/ibm/wlp/java/jre/bin/java -XX:MaxPermSize=256m -Dcom.ibm.gpfs.platform=GPFS -Dcom.ibm.gpfs.vendor=IBM -Djava.library.path=/opt/ibm/wlp/usr/servers/gpfsgui/lib/ -javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar -jar /opt/ibm/wlp/bin/tools/ws-server.jar gpfsgui --clean

Apr 17 09:50:03 server-21.localnet.com java: Available memory in the JVM: 484MB
Apr 17 09:50:03 server.localnet.com java: Max memory that the JVM will attempt to use: 512MB
Apr 17 09:50:03 server.localnet.com java: Number of processors available to JVM: 2
Apr 17 09:50:03 server.localnet.com java: Backend started.
Apr 17 09:50:03 server.localnet.com java: CLI started.
Apr 17 09:50:03 server.localnet.com java: Context initialized.
Apr 17 09:50:03 server.localnet.com systemd: Started IBM_GPFS_GUI Administration GUI.
Apr 17 09:50:04 server.localnet.com java: [AUDIT ] CWWKZ0001I: Application / started in 6.459 seconds.
Apr 17 09:50:04 server.localnet.com java: [AUDIT ] CWWKF0012I: The server installed the following features: [jdbc-4.0, ssl-1.0, localConnector-1.0, appSecurity-2.0, jsp-2.2, servlet-3.0, jndi-1.0, usr:FsccUserRepo, distributedMap-1.0].
Apr 17 09:50:04 server-21.localnet.com java: [AUDIT ] CWWKF0011I: ==> When you see that the service was started, everything should be OK.
Issue the systemctl status pmcollector and systemctl status pmsensors commands to check the status of the performance tools.
echo "get metrics mem_active, cpu_idle, gpfs_ns_read_ops last 10 bucket_size 1" | ./zc 127.0.0.1

Result example:

1: server-21.localnet.com|Memory|mem_active
2: server-22.localnet.com|Memory|mem_active
3: server-23.localnet.com|Memory|mem_active
4: server-21.localnet.com|CPU|cpu_idle
5: server-22.localnet.com|CPU|cpu_idle
6: server-23.localnet.com|CPU|cpu_idle
7: server-21.localnet.com|GPFSNode|gpfs_ns_read_ops
8: server-22.localnet.com|GPFSNode|gpfs_ns_read_ops
9: server-23.localnet.com|GPFSNode|gpfs_ns_read_ops

Row  Timestamp            mem_active  mem_active  mem_active  cpu_idle    cpu_idle    cpu_idle    gpfs_ns_read_ops  gpfs_ns_read_ops  gpfs_ns_read_ops
1    2015-05-20 18:16:33  756424      686420      382672      99.000000   100.000000  95.980000   0                 0                 0
2    2015-05-20 18:16:34  756424      686420      382672      100.000000  100.000000  99.500000   0                 0                 0
3    2015-05-20 18:16:35  756424      686420      382672      100.000000  99.500000   100.000000  0                 0                 6
4    2015-05-20 18:16:36  756424      686420      382672      99.500000   100.000000  100.000000  0                 0                 0
5    2015-05-20 18:16:37  756424      686520      382672      100.000000  98.510000   100.000000  0                 0                 0
6    2015-05-20 18:16:38  774456      686448      384684      73.000000   100.000000  96.520000   0                 0                 0
7    2015-05-20 18:16:39  784092      686420      382888      86.360000   100.000000  52.760000   0                 0                 0
8    2015-05-20 18:16:40  786004      697712      382688      46.000000   52.760000   100.000000  0                 0                 0
9    2015-05-20 18:16:41  756632      686560      382688      57.580000   69.000000   100.000000  0                 0                 0
10   2015-05-20 18:16:42  756460      686436      382688      99.500000   100.000000  100.000000  0                 0                 0
Node classes used for the management GUI
The IBM Spectrum Scale management GUI automatically creates the following node classes during installation:
GUI_SERVERS: Contains all nodes with a server license and all the GUI nodes
GUI_MGMT_SERVERS: Contains all GUI nodes
Each node on which the GUI services are started is added to these node classes.
For information about removing nodes from these node classes, see Removing nodes from management GUI-related node class.
For information about node classes, see Specifying nodes as input to GPFS commands.