
Readme and release notes for IBM Spectrum Scale 5.0.2.1: Spectrum_Scale_Data_Management-5.0.2.1-ppc64LE-Linux

Fix Readme


Abstract

Readme and release notes for the IBM Spectrum Scale 5.0.2.1 fix package Spectrum_Scale_Data_Management-5.0.2.1-ppc64LE-Linux.

Content

Readme file for: Spectrum Scale
Product/Component Release: 5.0.2.1
Update Name: Spectrum_Scale_Data_Management-5.0.2.1-ppc64LE-Linux
Fix ID: Spectrum_Scale_Data_Management-5.0.2.1-ppc64LE-Linux
Publication Date: 26 October 2018
Last modified date: 26 October 2018

Installation information

Download location

Below is a list of components, platforms, and file names that apply to this Readme file.

Fix Download for Linux

Product/Component Name: IBM Spectrum Scale
Platforms: Linux PPC64LE RHEL, Linux PPC64LE SLES, Linux PPC64LE Ubuntu
Fix: Spectrum_Scale_Data_Management-5.0.2.1-ppc64LE-Linux

Prerequisites and co-requisites

  • - Prerequisites

    You may use this 5.0.2.1 package to perform a first-time install, to upgrade from an existing 4.2.0.0 - 5.0.2.0 installation (if upgrading node by node), or to upgrade from 3.5.0 - 5.0.2.0 (if you shut down and upgrade the entire cluster).
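
    To confirm which upgrade path applies, you can check the level currently installed on each node before starting. This is a minimal check, assuming GPFS is installed in the default /usr/lpp/mmfs location:

         /usr/lpp/mmfs/bin/mmdiag --version   (reports the installed GPFS build level)
         rpm -qa | grep gpfs                  (lists installed GPFS packages on SLES and RHEL)
         dpkg -l | grep gpfs                  (lists installed GPFS packages on Ubuntu)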

Known issues

  • - Problems discovered in IBM Spectrum Scale releases

    None.

Installation information

  • - Downloading Images

    Choose the download option "Download using Download Director" to download the new Spectrum Scale package and place it in any desired location on the install node.
    Note: if you must use the download option "Download using your browser (HTTPS)" (not recommended), do not click the down arrow to the left of the package name; instead, right-click the package name and select the "Save Link As..." option. If you just click the download arrow, the browser will likely hang.

  • - Installing IBM Spectrum Scale update for Linux on Power Little Endian Systems

    After you have downloaded the IBM Spectrum Scale 5.0.2.1 update, follow the steps below to install the fix package:

    1. Ensure the package is executable by checking it with the ls -l command.

      You should see permissions similar to:
           -rwxr--r-- 1 root root 110885866 Apr 27 15:52 /download_dir/package_name

      If it is not executable, you can make the package executable using the following command:
           chmod +x /download_dir/package_name
    2. Extract the RPMs or Debian packages from the downloaded self-extracting package using the command for your edition (a verification example follows these steps):

      For Standard Edition:

      ./Spectrum_Scale_Standard-5.0.2.1-ppc64LE-Linux-install

      For Advanced Edition:

      ./Spectrum_Scale_Advanced-5.0.2.1-ppc64LE-Linux-install

      For Data Management Edition:

      ./Spectrum_Scale_Data_Management-5.0.2.1-ppc64LE-Linux-install

      For Data Access Edition:

      ./Spectrum_Scale_Data_Access-5.0.2.1-ppc64LE-Linux-install

      Optional Package for SLES and RedHat Enterprise Linux:

      • gpfs.docs-5.0.2-1.noarch.rpm
      • gpfs.gss.pmcollector-5.0.2-1.xxx.*.rpm (where xxx is the OS version)
      • gpfs.gss.pmsensors-5.0.2-1.xxx.*.rpm (where xxx is the OS version)
      • gpfs.gui-5.0.2-1.noarch.rpm
      • gpfs.java-5.0.2-1.*.rpm
      • gpfs.callhome-5.0.2-1.xxx.noarch.rpm (where xxx is the OS type)
      • gpfs.callhome-ecc-client-5.0.2-1.noarch.rpm
      • gpfs.kafka-5.0.2-1.*.rpm (x86_64 only)
      • gpfs.librdkafka-5.0.2-1.*.rpm (x86_64 only)
      • gpfs.hdfs-protocol-3.0.0-0.*.rpm (x86_64, ppc64, and ppc64le only)
      • gpfs.tct.client-1.1.6*.rpm (IBM Spectrum Scale Advanced or Data Management Edition only, x86_64 and ppc64le only)
      • gpfs.tct.server-1.1.6*.rpm (IBM Spectrum Scale Advanced or Data Management Edition only, x86_64 and ppc64le only)

      Optional Package for Ubuntu Linux:

      • gpfs.docs_5.0.2-1_all.deb
      • gpfs.gui_5.0.2-1_all.deb
      • gpfs.java_5.0.2-1_*.deb
      • gpfs.callhome_5.0.2-1_all.deb (x86_64 only)
      • gpfs.callhome-ecc-client_5.0.2-1_all.deb (x86_64 only)
      • gpfs.kafka_5.0.2-1_*.deb (x86_64 only)
      • gpfs.librdkafka_5.0.2-1_*.deb (x86_64 only)
      • gpfs.gss.pmcollector_5.0.2-1.xxx_*.deb (where xxx is the OS version)
      • gpfs.gss.pmsensors_5.0.2-1.xxx_*.deb (where xxx is the OS version)
      • gpfs.tct.client-1.1.6*.deb (IBM Spectrum Scale Advanced or Data Management Edition only, x86_64 only)
    3. Follow the installation and migration instructions in the Installing and upgrading topics of the IBM Spectrum Scale documentation.
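
      As a sanity check before following the installation instructions, you can list the packages extracted in step 2. This is a minimal sketch; it assumes the self-extracting package extracted to its default target directory /usr/lpp/mmfs/5.0.2.1 with gpfs_rpms and gpfs_debs subdirectories (adjust the path if you extracted to a different location):

           ls /usr/lpp/mmfs/5.0.2.1/gpfs_rpms   (RPMs for SLES and RedHat Enterprise Linux)
           ls /usr/lpp/mmfs/5.0.2.1/gpfs_debs   (Debian packages for Ubuntu Linux)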
  • - Upgrading GPFS nodes

    Note that in the instructions below, a node-by-node upgrade cannot be used to migrate from GPFS 4.1 or earlier releases. For example, upgrading from 4.1.1.16 to 5.0.2.1 requires a complete cluster shutdown, installing the upgrade on all nodes, and then starting the cluster again.

    Upgrading GPFS may be accomplished either by upgrading one node in the cluster at a time or by upgrading all nodes in the cluster at once. When upgrading GPFS one node at a time, the steps below are performed on each node in the cluster sequentially. When upgrading the entire cluster at once, GPFS must be shut down on all nodes in the cluster prior to upgrading.

    When upgrading nodes one at a time, you may need to plan the order in which to upgrade the nodes. Verify that stopping a particular node does not cause quorum to be lost and that the node is not the last available NSD server for any disks. Upgrade the quorum and manager nodes first. When upgrading the quorum nodes, upgrade the cluster manager last to avoid unnecessary cluster failover and election of a new cluster manager.
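
    To plan the upgrade order, you can review the node roles and current cluster state first. This is a quick sketch using standard GPFS administration commands:

         mmlscluster     (shows the quorum and manager designations of each node)
         mmlsmgr         (shows the current cluster manager and file system managers)
         mmgetstate -a   (shows the GPFS state of every node)
         mmlsnsd         (shows which nodes are NSD servers for each disk)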

    1. Prior to upgrading GPFS on a node, all applications that depend on GPFS (e.g. DB2) must be stopped. Any GPFS file systems that are NFS exported must be unexported prior to unmounting GPFS file systems.
    2. Stop GPFS on the node. Verify that the GPFS daemon has terminated and that the kernel extensions have been unloaded (mmfsenv -u). If mmfsenv -u reports that it cannot unload the kernel extensions because they are "busy", the install can proceed, but the node must be rebooted after the install. "Busy" means that some process has its current directory in a GPFS file system directory or has an open file descriptor there. The freeware program lsof can identify such processes so they can be stopped. Retry mmfsenv -u; if it then succeeds, a reboot of the node can be avoided.
    3. Upgrade GPFS as follows (make sure you are in the same directory as the files):

      1. For Linux:

        For SLES or RedHat Enterprise Linux:

        rpm -Fvh gpfs*.rpm
        rpm -ivh gpfs.license.*.rpm (if you are updating from 4.2.1.2 or older version)
        rpm -ivh gpfs.compression.*.rpm (if you are updating from older than 5.0.0.0 version)

        For Debian and Ubuntu Linux:

        dpkg -i gpfs*.deb

        Recompile any GPFS portability layer modules you may have previously compiled. For more information, refer to Building the GPFS portability layer on Linux nodes. A sketch of the complete per-node Linux sequence follows these steps.




      2. For AIX:
        Use the 'inutoc .' command to create a .toc file, which will be used by the installp command. The .toc file is created in the current working directory.
        Once the .toc file is created, upgrade GPFS using the installp command or via SMIT on the node. If you are in the same directory as the install packages and the .toc file, an example command might be:

        installp -agXYd . gpfs
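
    For Linux nodes, the per-node sequence described above might look like the following. This is a minimal sketch; it assumes the GPFS commands are in the default /usr/lpp/mmfs/bin directory (or in your PATH) and that the extracted update RPMs are in the current directory:

         mmumount all                     (unmount GPFS file systems on this node)
         mmshutdown                       (stop GPFS on this node)
         mmfsenv -u                       (verify the kernel extensions can be unloaded)
         rpm -Fvh gpfs*.rpm               (apply the 5.0.2.1 update packages)
         /usr/lpp/mmfs/bin/mmbuildgpl     (rebuild the GPFS portability layer)
         mmstartup                        (start GPFS on this node)
         mmgetstate                       (confirm the node returns to the active state)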

Additional information

  • - Notices
  • - Package information

    The images listed below and contained in the Self-Extracting Package (SE Package) are maintenance packages for IBM Spectrum Scale. The images are standard RPM or DEB packages that can be applied directly to your system.

    The packages can be used for new install or update from a prior level of IBM Spectrum Scale.

    After all RPMs or DEBs are installed, you have successfully updated your IBM Spectrum Scale product.

    Before installing IBM Spectrum Scale, it is necessary to verify that you have the correct levels of the prerequisite software installed on each node in the cluster. If the correct level of prerequisite software is not installed, see the appropriate installation manual before proceeding with your IBM Spectrum Scale installation.

    For the most up-to-date list of prerequisite software, see the IBM Spectrum Scale FAQ in the IBM® Knowledge Center .

    Update to Version:

    5.0.2.1

    Update from Version:

    4.2.0.0 - 5.0.2.0 (if upgrading node by node)
    3.5.0 - 5.0.2.0 (if you shut down and upgrade the entire cluster)

    SE Package Content (SLES and RHEL Linux):

    • gpfs.msg.en_US-5.0.2-1.noarch.rpm
    • gpfs.base-5.0.2-1.*.rpm
    • gpfs.gpl-5.0.2-1.noarch.rpm
    • gpfs.docs-5.0.2-1.noarch.rpm
    • gpfs.compression-5.0.2-1.*.rpm
    • gpfs.gskit-8.0.50-86.*.rpm
    • gpfs.gui-5.0.2-1.noarch.rpm
    • gpfs.hdfs-protocol-3.0.0-0.*.rpm (x86_64, ppc64, and ppc64le only)
    • gpfs.java-5.0.2-1.*.rpm
    • gpfs.license.xxx-5.0.2-1.*.rpm (where xxx is the license type)
    • gpfs.callhome-5.0.2-1.xxx.noarch.rpm (where xxx is the OS type)
    • gpfs.callhome-ecc-client-5.0.2-1.noarch.rpm
    • gpfs.gss.pmcollector-5.0.2-1.xxx.*.rpm (where xxx is the OS version)
    • gpfs.gss.pmsensors-5.0.2-1.xxx.*.rpm (where xxx is the OS version)
    • gpfs.kafka-5.0.2-1.*.rpm (x86_64 only)
    • gpfs.librdkafka-5.0.2-1.*.rpm (x86_64 only)
    • gpfs.adv-5.0.2-1.*.rpm (IBM Spectrum Scale Advanced or Data Management Edition only)
    • gpfs.crypto-5.0.2-1.*.rpm (IBM Spectrum Scale Advanced or Data Management Edition only)
    • gpfs.tct.client-1.1.6.*.rpm (IBM Spectrum Scale Advanced or Data Management Edition only, x86_64 and ppc64le only)
    • gpfs.tct.server-1.1.6.*.rpm (IBM Spectrum Scale Advanced or Data Management Edition only, x86_64 and ppc64le only)

    SE Package Content (Ubuntu Linux):

    • gpfs.msg.en_us-5.0.2-1_all.deb
    • gpfs.base-5.0.2-1_*.deb
    • gpfs.gpl-5.0.2-1_all.deb
    • gpfs.docs-5.0.2-1_all.deb
    • gpfs.compression-5.0.2-1_*.deb
    • gpfs.gskit_8.0.50-86.*.deb
    • gpfs.gui_5.0.2-1_all.deb
    • gpfs.java_5.0.2-1_*.deb
    • gpfs.kafka_5.0.2-1_*.deb (x86_64 only)
    • gpfs.librdkafka_5.0.2-1_*.deb (x86_64 only)
    • gpfs.license.xxx_5.0.2-1_*.deb (where xxx is the license type)
    • gpfs.gss.pmcollector_5.0.2-1.xxx_*.deb (where xxx is the OS version)
    • gpfs.gss.pmsensors_5.0.2-1.xxx_*.deb (where xxx is the OS version)
    • gpfs.adv_5.0.2-1_*.deb (IBM Spectrum Scale Advanced or Data Management Edition only)
    • gpfs.tct.client-1.1.6*.deb (IBM Spectrum Scale Advanced or Data Management Edition only, x86_64 only)

    SE-Package contents:

    To view the full list of packages including protocols:

    ./Spectrum_Scale_xxx-5.0.2.1-yyy-Linux-install --manifest (where xxx is the edition and yyy is the architecture (ppc64LE, ppc64 or x86_64))
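
    For example, for the package covered by this readme:

    ./Spectrum_Scale_Data_Management-5.0.2.1-ppc64LE-Linux-install --manifest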

  • - Summary of changes for IBM Spectrum Scale

    Unless specifically noted otherwise, this history of problems fixed for IBM Spectrum Scale 5.0.x applies for all supported platforms.

    Problems fixed in IBM Spectrum Scale 5.0.2.1 [October 19, 2018]

    • The mmrestripefs -b operation to rebalance the file system can hang indefinitely.
    • Work around: Resume the disks that were suspended while rebalancing was in progress.
    • Problem trigger: Start the file system rebalance with the mmrestripefs -b command, then suspend some disks.
    • Symptom: The file system rebalance operation hangs indefinitely.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: 5.0 or later Scale Users with traditional file system configuration.
    • Customer Impact: High. IJ09718
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • If creating a new RDMA connection fails while connecting, the connection is still in INIT state. In this case, the code should check for the INIT state. The current version did not handle this correctly resulting in the assertion message "[X] logAssertFailed: localConnP->useCnt == 1" and stopping the mmfsd process.
    • Work Around: None.
    • Problem trigger: The problem gets triggered, if the underlying operating system level RDMA code returns an error code while creating a new RDMA connection. In this case the GPFS connection setup code gets this error message in the middle of setting up a connection. Since this can happen at any time, for example due to networking issues, GPFS needs to be able to handle this situation.
    • Symptom: Abend/Crash.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: RDMA.
    • Customer Impact: High Importance. IJ09770
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The policy engine aborts when it encounters a valid directory as a to-be-processed target, even though compression is not applicable to directories.
    • Work around: Refine the compression policy rule to exclude directories from the to-be-processed targets.
    • Problem trigger: Files are being compressed through compression policy rules and some directories are selected as valid targets based on the policy rule.
    • Symptom: The compression policy rule process is interrupted.
    • Platforms affected: ALL Operating System environments.
    • Functional area affected: compression/policy.
    • Customer Impact: Suggested. IJ10412
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • "LOGSHUTDOWN :1 sgmMsgCopyBlock RPCs are still pending" appears in mmfs.log before the GPFS daemon is shut down.
    • Work Around: None.
    • Problem trigger : mmchdisk, mmrestripefs or mmdeldisk running under low free buffer conditions.
    • Symptom: Abend/Crash.
    • Platforms affected: all.
    • Functional Area affected: All Scale Users.
    • Customer Impact: High Importance. IJ09786
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • File system cleanup cannot finish because of a file system panic. This prevents the file system from being remounted and could also prevent a node from rejoining the cluster after quorum loss.
    • Work Around: Restart GPFS daemon on the node.
    • Problem trigger: File system panic occurs during certain phase of mmrestripefs or mmchpolicy command.
    • Symptom: Cluster/File System Outage. Node expel/Lost Membership.
    • Platforms affected: All.
    • Functional Area affected: Admin Commands.
    • Customer Impact: High Importance. IJ09787
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Assert going off: !addrdirty or synchedstale or alldirty.
    • Work Around: None.
    • Problem trigger: Certain customer workload can run into the problem in a specific code path when the part of the allocated disk space beyond the end of the file is not zeroed out. It's rare and timing related.
    • Symptom: Abend/Crash.
    • Platforms affected: all.
    • Functional Area affected: All Scale Users.
    • Customer Impact: High Importance IJ09549
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Assert or segmentation fault.
    • Work Around: None.
    • Problem trigger: Manager nodes going down while some of the manager nodes are low in memory in a cluster hosting multiple file systems.
    • Symptom: Abend/Crash.
    • Platforms affected: all.
    • Functional Area affected: All Scale Users.
    • Customer Impact: High Importance. IJ09792
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The daemon crashes when a user runs mmrestripefs with the -b option to rebalance the file system.
    • Work around: Restart the file system rebalance operation and do not add new disks to the file system while rebalancing is in progress, or run the mmrestripefs command with the "-b --strict" options to perform an old-style file system rebalance.
    • Problem trigger: Adding new disks into file system while file system rebalance is in progress.
    • Symptom: Daemon crashed.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: 5.0 and later Scale Users with traditional file system configuration.
    • Customer Impact: Critical. IJ09589
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Values like "01" or "02" are accepted as arguments for the "mmces log level" command, but ultimately result in a "No such file or directory" error message.
    • Work Around: Provide the correct one-digit log level numbers.
    • Problem trigger: Any number with leading zeros. These values were checked for an integer range between 0 and 3, which passes, since 01, 001, etc. are valid as the numeric value 1. However, the values were used in some code branches as strings, where it makes a difference whether '1' or '01' is used, and that is what triggered the failure.
    • Symptom: Error output/message.
    • Platforms affected: Linux Only.
    • Functional Area affected: CES.
    • Customer Impact: has little or no impact on customer operation. IJ09423
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • There would be some threads waiting for the exclusive use of the connection for a long time even though no thread is sending on the connection, for example: Waiting 7192.9293 sec since 07:53:38, monitored, thread 2155 Msg handler ccMsgPing: on ThCond 0x7FE0A80012D0 (InuseCondvar), reason 'waiting for exclusive use of connection for sending msg'.
    • Work around: None.
    • Problem trigger: Many threads are waiting to send on one connection; if a reconnect happens at that time, it can cause this long waiter.
    • Symptom: Hang/Deadlock/Unresponsiveness/Long Waiters.
    • Platforms affected: All.
    • Functional Area affected: All.
    • Customer Impact: High Importance. IJ09796
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Fix a potential signal 11 problem that might occur when running mmrestripefs -r.
    • Work Around: The problem is caused by invalid DAs, so correcting the DA manually could also fix the problem.
    • Problem trigger: Users whose files contain invalid DAs and who run mmrestripefs -r are potentially affected.
    • Symptom: Unexpected Results/Behavior.
    • Platforms affected: All.
    • Functional Area affected: All.
    • Customer Impact: Suggested. IJ09548
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Changes in the port configuration of object services may fail if they do not match the expected default values.
    • Work Around: None.
    • Problem trigger: Currently the object services have default ports hard-coded in the CES code: proxy-server: 8080, account-server: 6202, container-server: 6201, object-server: 6200, object-server-sof: 6203. Whenever one of these settings changes in a newer object distribution, issues occur.
    • Symptom: Error output/message.
    • Platforms affected: ALL Linux OS environments (CES nodes).
    • Functional Area affected: System Health.
    • Customer Impact: High Importance, if the port settings differ from the expected defaults. IJ09797
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Race condition between relinquish and migrate threads causing long waiters.
    • Work Around: Restart mmfsd.
    • Problem trigger: If migrate is running and user issues a 'mmvdisk rg delete' then this problem can occur.
    • Symptom: Abend/Crash.
    • Platforms affected: Linux Only.
    • Functional Area affected: GNR / Mestor.
    • Customer Impact: Suggested. IJ09809
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The daemon hits a SIGBUS error and crashes when using overwrite-mode trace on a Power 9 system with kernel version 4.14.0-49.9.1 or later. This is a Linux kernel bug on Power 9 CPU systems.
    • Work around: Disable the overwrite mode trace.
    • Problem trigger: Enable the overwrite mode tracing on Power 9 system with kernel versions 4.14.0-49.9.1 and later.
    • Symptom: Daemon crashes and file system outages.
    • Platforms affected: Power 9 Linux system with kernel version 4.14.0-49.9.1 and later.
    • Functional Area affected: Overwrite mode tracing on Power 9 Linux system.
    • Customer Impact: Medium. IJ09810
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The commands "mmlsperfdata" and "mmperfmon" showed an error message at the end of the regular command output, like: Exception TypeError: "'NoneType' object is not callable".
    • Work Around: None.
    • Problem trigger: The issue was reported on Ubuntu and was not seen on RHEL. It showed up whenever mmlsperfdata or mmperfmon was executed. The reason was a Python-internal list that was not cleared, so the reported error text showed up only right before the program finally terminated.
    • Symptom: Error output/message.
    • Platforms affected: Ubuntu reported, but probably all Operating System environments.
    • Functional Area affected: System Health.
    • Customer Impact: Suggested: has little or no impact on customer operation. IJ09814
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • File system snapshot create or delete commands do not return for a long time when DMAPI operations are busy, which causes a file system outage because the file system is quiesced during the snapshot create or delete operation.
    • Work around: Restart Spectrum Scale on the DMAPI session node, or wait for the completion of the in-progress DMAPI operations.
    • Problem trigger: After DMAPI is enabled and busy with access operations, perform snapshot create or delete operations.
    • Symptom: File system outages that no access to it is allowed.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: All Scale Users.
    • Customer Impact: High. IJ09558
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • When running with QOS enabled, GPFS daemon may fault with signal 11.
    • Work Around: Disable QOS, until fix can be applied.
    • Problem trigger: UMALLOC returns NULL.
    • Symptom: signal 11 fault in QosIdPoolHistory::setNslots.
    • Platforms affected: All.
    • Functional Area affected: QOS.
    • Customer Impact: Critical for customers using QOS, especially if there are many nodes or pid-stats have been enabled. IJ09571
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Directory prefetch reports that some directories failed to prefetch even though they are cached.
    • Work around: Check whether the listed directories are actually cached.
    • Problem trigger: Directory prefetch.
    • Symptom: Unexpected Results/Behavior.
    • Platforms affected: Linux only.
    • Functional Area affected: AFM.
    • Customer Impact: Suggested. IJ09815
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • When the AFM relationship is marked Sick because the remote site is not responding for an IW cache fileset, and replication is later enabled again so the relationship moves out of Sick, there is a possibility of hitting an assert designed to catch a certain state of the inode. This fix removes the out-of-place assertion.
    • Work around: None.
    • Problem trigger: An unhealthy network between the home and cache, that can elongate operations in the AFM queue sometimes.
    • Symptom: Fileset moves to Unmounted state with message that home is taking long to respond. For IW filesets, such conditions may lead to the Assert when the network stabilizes between the 2 sites.
    • Platforms affected: Linux only.
    • Functional Area affected: AFM.
    • Customer Impact: Suggested. IJ09827
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Under rare circumstances, mm-commands can take unexpectedly long (up to 2 minutes) because of slow CCR RPCs between the CCR server and client.
    • Work around: None.
    • Problem trigger: The CCR server expects a final RPC handshake that the client does not provide.
    • Symptom: Performance Impact/Degradation.
    • Platforms affected: Just seen on a Linux OS environment (RHEL).
    • Functional Area affected: Admin Commands and CCR.
    • Customer Impact: High Importance. IJ09552
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The disk addresses of blocks for a compressed file are corrupted to look like a non-compressed file's disk addresses when the small file is extended or extended attributes are set on it, so the compressed file cannot be read or decompressed.
    • Work around: Run offline fsck to fix the corrupted disk addresses for compressed files.
    • Problem trigger: The mmfsd daemon or the system crashes while a small compressed file is being extended to a large file or large EAs are being set on it.
    • Symptom: The compressed files cannot be read or decompressed.
    • Platforms affected: ALL Operating System environments.
    • Functional area affected: File compression.
    • Customer Impact: Critical. IJ10414
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Erroneous display of the event "ib_rdma_port_width_low".
    • Work Around: On each affected node, edit the /var/mmfs/mmsysmon/mmsysmonitor.conf file and add "ib_rdma_monitor_portstate = false" to the "[network]" section, then restart monitoring with "mmsysmoncontrol restart".
    • Problem trigger: Running Spectrum Scale > 5.0.1 and an IB driver which causes ibportstate to report a LinkWidth of "undefined (19)".
    • Symptom: Unexpected Results/Behavior.
    • Platforms affected: ALL Linux OS environments.
    • Functional Area affected: System Health.
    • Customer Impact: Suggested: has little or no impact on customer operation. IJ09587
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • A four-node cluster with Object service configured and declared grouping for four CES-IPs and nodes showed a "ces_network_affine_ips_not_defined" event for nodes which hosted IPs having additional object attributes assigned. So an IP was hosted in fact, but that warning came up additionally.
    • Work Around: The "ces_network_affine_ips_not_defined" event could be declared as to be "ignored" in the "events" section of the mmsysmonitor.conf file. This skips any triggered warning for this event.
    • Problem trigger: Whenever a single IP address is assigned to a node, based on group membership roles and having an additional object-attribute. In case of multiple assigned IPs the issue does not show up, if there is at least one IP with the correct grouping, but without further object attribute.
    • Symptom: Error output/message.
    • Platforms affected: ALL Linux OS environments (CES nodes).
    • Functional Area affected: System Health.
    • Customer Impact: Medium impact, since the IP addresses are indeed hosted, and only the event is misleading. IJ09560
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The progress indicator of a restripe of user files does not match the data actually processed.
    • Work around: Set pitWorkerThreadsPerNode to 1, but this will slow down the restripe operation.
    • Problem trigger: There are many big files in the file system and a restripe operation is run against it.
    • Symptom: Restripe progress can jump to 100% completion from a very small value (e.g. 5%).
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: All Scale Users.
    • Customer Impact: Medium. IJ09829
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The default value of the forceLogWriteOnFdatasync parameter reported by the mm commands does not match the value in the daemon. IJ09561
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • A filesystem, which was not mounted at the default mountpoint (as shown by mmlsfs), was reported as "stale mount" which leads to a "false-negative" health state report. The mount procedure was declared in a "mmfsup" script, which is executed at GPFS startup.
    • Work Around: mount the filesystem on the declared default mountpoint or change the declared default mountpoint.
    • Problem trigger: Whenever the declared mountpoint differs from the real mountpoint for a GPFS filesystem.
    • Symptom: Error output/message.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: System Health.
    • Customer Impact: High Importance. IJ09562
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • With the NSD protocol, an incorrect remote cluster mount is panicked when the fileset is not responding. AFM kills the stuck requests on the remote mount by panicking the remote filesystem. If there are multiple remote filesystems, it is possible that the remote filesystem panicked is not the correct one for the fileset.
    • Work around: None.
    • Problem trigger: Usage of multiple remote filesystems and the network issues between cache and home.
    • Symptom: Unexpected Results/Behavior.
    • Platforms affected: Linux only.
    • Functional Area affected: AFM.
    • Customer Impact: High Importance. IJ09557
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Users may see many threads blocked at 'wait for GNR buffers from steal thread' on the GNR server side. This can happen when very heavy small writes run in parallel and consume the GNR buffers very quickly.
    • Work Around: None.
    • Problem trigger: When running very heavy small writes workloads.
    • Symptom: Hang/Deadlock/Unresponsiveness/Long Waiters.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: ESS/GNR.
    • Customer Impact: High Importance. IJ09870
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Users may see many threads blocked at 'wait for GNR buffers from steal thread' on the GNR server side with a very small GNR buffer setting. This can happen when very heavy small writes run in parallel and consume the GNR buffers very quickly.
    • Work Around: None.
    • Problem trigger: When running very heavy small writes workload with small GNR buffer setting.
    • Symptom: Hang/Deadlock/Unresponsiveness/Long Waiters.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: ESS/GNR.
    • Customer Impact: High Importance. IJ09752
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • This problem can happen when the Object protocol is installed using an external Keystone database, and administrator role in that database is not specified as the all lowercase string admin (e.g. Admin). Although Keystone supports case-insensitive role values, the Object protocol configuration command only checks for the lowercase value. When this condition exists, the Object protocol installation will fail with a message similar to "Swift user does not have admin role in service project".
    • Work around: If this fix is not applied, a work around would be to change the name of the administrator role to be the all lowercase value "admin" so that the Object protocol configuration scripts will match the value correctly.
    • Problem trigger: Object installed using an external Keystone database, where the administrator role "admin" is in all uppercase or mixed case.
    • Symptom: Upgrade/Install failure.
    • Platforms affected: ALL Linux OS environments.
    • Functional Area affected: Object.
    • Customer Impact: Suggested. IJ09735
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • There is a deadlock where a mutex is not released by the ping thread, which is in a while loop, while at the same time another thread is waiting to acquire this mutex to set the state.
    • Work Around: None.
    • Problem trigger: When a homelist is being unregistered and meanwhile another handler is trying to register the same homelist.
    • Symptom: Deadlock.
    • Platforms affected: Linux Only.
    • Functional Area affected: AFM.
    • Customer Impact: Suggested. IJ09753
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Lookup and readdir performance issues with AFM ADR after converting the regular independent fileset to the AFM ADR fileset as the asynchronous lookups are sent to gateway node in the application path.
    • Work around: None.
    • Problem trigger: AFM ADR inband conversion.
    • Symptom: Performance Impact/Degradation.
    • Platforms affected: Linux Only.
    • Functional Area affected: AFM.
    • Customer Impact: High Importance. IJ09756
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The loghome vdisk partition distribution is not even among the disks of the vdisk. This causes performance degradation during IO.
    • Work around: None.
    • Problem trigger: If any disk goes down or fails, an uneven partition distribution occurs, which causes IO performance degradation.
    • Symptom: Performance Impact/Degradation.
    • Platforms affected: All supported Operation systems.
    • Functional Area affected: ESS/GNR.
    • Customer Impact: High Importance. IJ10416
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • If pdisk corruption occurs, for example if a bad SAS HBA card or bad CPU chip causes silent data corruption on writes to pdisks, then after the problem hardware has been repaired, the system can continue to report misleading "I/O error", "err 110" messages, and may continually resign and recover service of the recovery group, causing recovery from the corruption to take an unexpectedly long time.
    • Work around: None.
    • Problem trigger: The problem is triggered by checksum errors detected on pdisks. This can be triggered by faulty hardware that writes incorrect data to disk without reporting any errors back or it may be caused by a malicious program writing over the disk drives.
    • Symptom: Performance Impact/Degradation.
    • Platforms affected: ALL Operating System environments.
    • Functional Area affected: ESS/GNR.
    • Customer Impact: High Importance. IJ10418
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Not all error messages are printed by mmlsfirmware. In particular, a warning should be issued if not all of the targeted nodes could be reached. Also, an informational message should be put out if the targeted node does not have any components that apply to mmlsfirmware, such as when issuing the command on a client node.
    • Work around: None.
    • Platforms affected: ESS/GSS configurations.
    • Functional Area affected: ESS.
    • Customer Impact: Suggested. IJ10039
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • No audit records are logged for SMB CLI commands.
    • Work Around: None.
    • Problem trigger: Applies to all SMB CLI commands.
    • Symptom: No audit records seen when checking with lscommonevent command or in GUI Command Audit Log.
    • Platforms affected: All.
    • Functional Area affected: SMB GUI.
    • Customer Impact: Suggested - has little or no impact on customer operation. IJ10419
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • Under very low maxFilesToCache (100) and low maxStatCache (4k) settings, certain race windows were exposed which resulted in kernel panics or daemon crashes on nodes with LROC devices.
    • Work around: none
    • Problem trigger: On nodes with LROC-devices, under extremely low stat-cache settings, certain race windows are exposed which causes daemon/kernel crashes
    • Symptom: Abend/Crash
    • Platforms affected: x86_64-linux only, those that support LROC
    • Functional Area affected: LROC
    • Customer Impact: Critical (could cause data corruption) IJ10308
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • When running a remote cluster environment and the owning cluster and accessing cluster are at 5.0.0.x or 5.0.1.x code level with File Audit Logging enabled, and the owning cluster is upgraded first to 5.0.2.x and mmchconfig --release=LATEST is run, then the remotely mounted filesystems on the accessing clusters will panic and not be able to mount.
    • Work around: If this happens, users should either upgrade the accessing cluster to the 5.0.2.x code level or disable File Audit Logging on the owning cluster until user is able to upgrade the accessing cluster to the 5.0.2.x code stream.
    • Problem trigger: This issue affects customers with file audit logging enabled on one or more filesystems on an owning cluster at the 5.0.0.x or 5.0.1.x code level, with the same file system remotely mounted on an accessing cluster at the 5.0.0.x or 5.0.1.x code level, where the owning cluster is upgraded to the 5.0.2.x code level.
    • Symptom: Unexpected Results/Behavior
    • Platforms affected: x86_64-linux and ppc64le-linux
    • Functional Area affected: File audit logging
    • Customer Impact: Critical IJ10318
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The reported health state of a component (e.g. FILESYSTEM) which has multiple entities (individual filesystems) is not reported correctly if some of them are HEALTHY and another is in TIPS state. The expectation is that the overall state for the component is TIPS in this case.
    • Work Around: None
    • Problem trigger : Have multiple filesystems in HEALTHY state and one or more filesystems in TIPS state. The TIPS state could be reached because the mountpoint of the filesystem is different from its declared mountpoint (check with mmlsfs).
    • Symptom: Error output/message
    • Platforms affected: ALL Operating System environments
    • Functional Area affected: System Health
    • Customer Impact: has little or no impact on customer operation IJ10373
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • After a "mmchnode --ces-disable" of a CES node using the SMB protocol, there are still SMB/CTDB specific files on the system. This may yield to unexpected side effects if those nodes are moved to a different cluster.
    • Work Around: Manuell cleanup of "tdb" files in /var/lib/samba
    • Problem trigger : Run "mmchnode --ces-disable" on a CES node which has the SMB protocol installed. The expectation is that all protocol specific configuration files are removed, but that is not the case. There are remaining "tdb" files which were not deleted.
    • Symptom: Unexpected Results/Behavior
    • Platforms affected: ALL Linux OS environments (CES nodes)
    • Functional Area affected: CES
    • Customer Impact: High Importance: an issue which might cause a degradation of the system in some manner IJ10374
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • GUI will still show SMB Shares after deletion
    • Work Around: None
    • Problem trigger: A SMB Share is deleted through GUI.
    • Symptom: GUI will still show SMB Shares after deletion.
    • Platforms affected: All
    • Functional Area affected: SMB GUI
    • Customer Impact: Suggested - has little or no impact on customer operation. IJ10378
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • GUI activity log files are partially removed by call home scheduled data collection.
    • Work around: mmcallhome schedule delete --task DAILY, mmcallhome schedule delete --task WEEKLY. Please re-add the schedules after the issue is fixed.
    • Problem trigger: running daily or weekly call home schedules
    • Symptom: Unexpected Results/Behavior
    • Platforms affected: All
    • Functional Area affected: GUI + Callhome
    • Customer Impact: Suggested (as long as there are no issues with GUI, truncating logs is irrelevant; otherwise this could be really bad) IJ10401
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • RPC messages may be received twice after a reconnect and then hit a sanity check, such as the assert below: logAssertFailed: err == E_OK, at dirop.C 6389
    • Work around: None
    • Problem trigger: A poor network leads to reconnects happening
    • Symptom: Abend/Crash
    • Platforms affected: ALL Operating System environments
    • Functional Area affected: All
    • Customer Impact: High Importance IJ10471
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • A node is expelled from the cluster because a message is reported as lost after a reconnect, as in the message below: Message ID 2449 was lost by node IP_ADDR NODE_NAME wasLost 1
    • Work around: None
    • Problem trigger: Messages are pending for more than 30 seconds waiting for replies, and a poor network leads to reconnects happening
    • Symptom: Node expel/Lost Membership
    • Platforms affected: ALL Operating System environments
    • Functional Area affected: All
    • Customer Impact: High Importance IJ10473
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • A few GPFS commands, such as mmaddnode, mmdelnode, or commands that change quorum semantics (mmchnode), may cause the status of the systemd mmsdrserv.service to be reported as failed.
    • Work Around: Reset or ignore the failed mmsdrserv.service status.
    • Problem trigger: mmaddnode, mmdelnode, mmchnode --quorum/--noquorum while GPFS is running
    • Symptom: Error output/message
    • Platforms affected: Linux systems with systemd version 219 or later.
    • Functional Area affected: Admin Commands - systemd
    • Customer Impact: Suggested IJ09554
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • The Hadoop component is missing from mmhealth when fully qualified host names are used by Hadoop but short host names are used by GPFS.
    • Work Around: Change the Hadoop configuration to use the same host names as the GPFS cluster.
    • Problem trigger: When Hadoop host names differ from GPFS host names
    • Symptom: Missing monitoring
    • Platforms affected: Linux Only
    • Functional Area affected: System Health
    • Customer Impact: Suggested IJ10116
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • AFM with verbs RDMA does not work due to the way AFM changes the thread credentials during the replication.
    • Work Around: None
    • Problem trigger: Always happens when RDMA+AFM is enabled with NSD backend.
    • Symptom: Unexpected Results/Behavior
    • Platforms affected: Linux Only
    • Functional Area affected: AFM
    • Customer Impact: High Importance IJ10398
    • IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
    • AFM revalidation is sometimes slower in the caching modes.
    • Work Around: None
    • Problem trigger: AFM caching modes are used and readdir is performed on them after the refresh interval expires.
    • Symptom: Performance Impact/Degradation
    • Platforms affected: Linux Only
    • Functional Area affected: AFM
    • Customer Impact: High Importance IJ10400
    • This update addresses the following APARs: IJ09423 IJ09548 IJ09549 IJ09552 IJ09554 IJ09557 IJ09558 IJ09560 IJ09561 IJ09562 IJ09571 IJ09587 IJ09589 IJ09735 IJ09752 IJ09753 IJ09756 IJ09758 IJ09770 IJ09786 IJ09787 IJ09792 IJ09795 IJ09796 IJ09797 IJ09809 IJ09810 IJ09814 IJ09815 IJ09827 IJ09829 IJ09870 IJ10039 IJ10116 IJ10308 IJ10318 IJ10373 IJ10374 IJ10378 IJ10398 IJ10400 IJ10401 IJ10412 IJ10414 IJ10416 IJ10418 IJ10419 IJ10471 IJ10473 IJ10565.

    Problems fixed in Spectrum Scale 5.0.2.1 for Protocols include the following:

    • gui: Performance charts break when nodes are removed from cluster under some special and rare condition
    • gui: Start NSD ; Info , OK icons missing on IE11
    • gui: ESS2 in mmvdisk-enabled cluster, replace health disk fail because pdisk is at "ok/suspended" state
    • gui: [FrontEnd DEV] Display write endurance of SSDs
    • gui: GUI create metadata-only file system fail when using all space
    • gui: Files>Quotas: Create quota progress window title shows 'Setting Quota' instead of 'Create Quota'
    • gui: Access>File System ACL>ACT templates : Group(Owning group) text overlaps dropdown box arrow
    • gui: Services> GUI External groups displayed in Groups grid when external user used for GUI log in
    • gui: Node>Node Classes : Modify Node class > member node classes not loaded
    • gui: Protocols>NFS : Edit Access Control grayed on newly created NFS export
    • gui: Fans not displayed in hardware details
    • gui: Check in help files to 5.0.2.1 and GUI master streams
    • gui: CVE-2018-8039: Upgrade WPL to 18.0.0.3
    • gui: RDMA_INTERFACES check fails with NullPointerException
    • gui: Afm option restricts specific user input
    • gui: runtask FILESYSTEMS fails with "Option '--file-audit-log' is incorrect." if minReleaseLevel is too old
    • nfs: Version 2.5.3-ibm028.00
    • nfs: Set op_ctx in state_blocked_lock_caller
    • nfs: Disable GPFS fsal async block lock support
    • nfs: MDCACHE - Release unused new entries
    • nfs: Add stats for NFSv3 operations with min/max/avg latency
    • nfs: Add send queue stats in libntirpc, pulls up a new ntirpc as well
    • nfs: libntirpc pullup: multiple send queues
    • nfs: MDCACHE - Close more export/unexport races
    • nfs: MDCACHE - Initialize dirent structs in entry early
    • nfs: FSAL_MDCACHE: prevent double free on new raced entries
    • smb: winbind serviceability improvement: log idmap range violation
    • smb: Remove Python dependency from gpfs.smb
    • smb: Package cleanup. Remove ldb* tools, NetBIOS name server support, SMB print support, and smbtar command.
    • smb: Improve the mmces service start SMB error message if there are. Add symbolic link for CTDB nodes file.
    • smb: Rebase to Samba 4.6.16
    • obj: Fixed proxy server error when a non-empty directory tries to be removed
    • obj: Addressed lxml vulnerability by using python-defusedxml
    • obj: Return appropriate 404 error for DiskFileError
    • obj: Updated hub for all swift daemons so they do not add multiprocessing support to other daemons
    • obj: Updated spectrum-scale-object rpm and deb packages
    • obj: Updated spectrum-scale-object-selinux rpm package
    • pmswift: Ensure logging is configured before writing log messages
    • pmswift: Updated pmswift rpm and deb package
    • toolkit: support for Data Access edition
    • toolkit: Ubuntu16.04.5 currency support
    • toolkit: Support for file audit logging package installation on ess protocol node/client node
    • callhome: Prevention of GUI logs removal by scheduled data collection
    • callhome: Fixed RC=7 while trying to generate a session ID
    • protocol tracing: switched from using dumpcap to using tcpdump when performing network tracing for better availability

    Problems fixed in Spectrum Scale 5.0.2.0 for Protocols:

    • Please see the "What's New" page in the IBM Knowledge Center

[{"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"STXKQY","label":"IBM Spectrum Scale"},"Component":"","ARM Category":[],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

Document Information

Modified date:
30 October 2018

UID

isg400004178