Topic
  • 28 replies
  • Latest Post - ‏2016-09-06T12:42:35Z by gpfs@us.ibm.com
gpfs@us.ibm.com
gpfs@us.ibm.com
344 Posts

Pinned topic GPFS V4.1 Announcements

‏2014-06-17T16:55:14Z |

Watch this thread for announcements on the availability of updates for GPFS v4.1

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-06-27T18:09:20Z  

    Abstract

    IBM has identified an issue with GPFS version 3.5.0.16 and later releases that may affect installations that use a non-English language locale and also have Persistent Reserve enabled for the cluster.

    Problem Summary


    On such systems, it is possible that the disk usage information recorded in the main configuration file (/var/mmfs/gen/mmsdrfs) is not correct.  This may result in improper handling of the PR settings and an inability to mount the affected file systems.  The root cause for this problem is corrected in GPFS 3.5.0.19 and GPFS 4.1.0.1 (APAR IV61323).

    Fix


    To see if your system is susceptible to this problem, run the following command

    grep SG_DISKS /var/mmfs/gen/mmsdrfs | awk -F : '{ print $3 " " $5 " " $8 }'

    and examine the reported disk usage information.  If you see 'descOnly' shown for disks that are supposed to contain data or metadata, then your system is affected and you need to correct the problem using the following procedure:

    - install GPFS 3.5.0.19 or GPFS 4.1.0.1

    - for each of the affected file systems run
       mmcommon recoverfs <deviceName>

    If the mmcommon recoverfs command fails because it cannot read the file system descriptor, then you will need to temporarily disable the Persistent Reserve feature:

    1. mmshutdown -a
    2. mmchconfig usePersistentReserve=no
    3. mmstartup -a
    4. mmcommon recoverfs <deviceName>  # repeat for all affected file systems
    5. mmshutdown -a
    6. mmchconfig usePersistentReserve=yes
    7. mmstartup -a
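
    For convenience, the check and the per-file-system repair can be scripted. The sketch below is not an official tool; it assumes that, in the output of the grep/awk check above, the first printed field is the file system name, the second the NSD name, and the last the recorded disk usage -- verify the field positions against your own mmsdrfs output before relying on it, and substitute your own device names.

    # Minimal sketch: list SG_DISKS entries whose recorded usage is 'descOnly',
    # then review whether those disks were meant to hold data or metadata.
    grep SG_DISKS /var/mmfs/gen/mmsdrfs | awk -F : '{ print $3 " " $5 " " $8 }' | awk '$NF == "descOnly"'

    # After installing GPFS 3.5.0.19 or 4.1.0.1, repair each affected file system
    # (the device names gpfs1 and gpfs2 are placeholders):
    for dev in gpfs1 gpfs2; do mmcommon recoverfs $dev; done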

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-07-01T18:17:03Z  

    GPFS 4.1.0.1 is now available from IBM Fix Central:                     

    http://www-933.ibm.com/support/fixcentral

    Problems fixed in GPFS 4.1.0.1

    May 30, 2014

    * Fix thread-safe problem in dumping GPFS daemon threads backtrace.
    * Fixes a problem with fsck repair of deleted root directory inode of independent filesets.
    * Fixed a problem in clusters configured for secure communications (cipherList configuration variable containing a cipher other than AUTHONLY) which may cause communications between nodes to become blocked.
    * After a file system is panicked, new lock range requests will not be accepted.
    * This fix only affects customers running GNR/GSS on Linux, and who have in the past misconfigured their GNR servers by turning the config parameter "numaMemoryInterleave" off, and who experienced IO errors on Vdisks as a result of that misconfiguration. These IO errors can potentially corrupt in-memory metadata of the GNR/GSS server, which can lead to data loss later on. This fix provides a tool that can be used to locate and repair such corruption.                  
    * Remove mmchconfig -N restrictions for aioWorkerThreads and enableLinuxReplicatedAio.                                
    * Fixed a problem when reading a clone child from a snapshot.
    * Fixed a rare race condition causing an assert when two threads are attempting to do a metanode operation at the same time while the node is in the process of becoming a metanode.
    * Fixed a deadlock in a complicated scenario involving restripe, token revoke and exceeding the file cache limit.
    * Fixed a race between log recovery and the mnodeResign thread.
    * E_VALIDATE errors in the aclFile after node failure                                                                 
    * Deal with stress condition where mmfsd was running out of threads                                                   
    * Fix a problem in log recovery that would cause it to fail when replaying a directory insert record. The error only occurs for filesystems in version 4.1 format, where the hash value of the file name being inserted is the same as an existing file in the directory. The problem is also dependent on the length of the file name, and only happens if the system crashes after the log record is committed, but before the directory contents are flushed.                        
    * Fixed the problem that was caused by a hole in the cleanBuffer after the file system panicked.
    * Close a hole where the fileset snapshot restore tool (mmrestorefs -j) may fail to restore changed data for a clone child file.
    * Fix a rare assert which happens under low disk space situations.
    * Fixed deadlock during mmap pagein.
    * Fixed the problem of excessive RPCs to get indirect blocks and the problem of metanode lock starvation involving a huge sparse file.
    * A problem has been fixed where the GPFS daemon terminates abnormally when direct I/O and vector I/O (readv/writev) is used on encrypted files, and the data is replicated, or the data must go through an NSD server.
    * Fix a potential deadlock when selinux is enabled and FS is dmapi managed.
    * Close a hole where the fileset snapshot restore tool (mmrestorefs -j) may fail to restore a snapshot in a race condition where one restore thread is deleting a file while another restore thread is trying to get file attributes for the same file.
    * Fixed a kernel oops caused by a race in multiple NFS reads on the same large file.
    * mmchfirmware command will avoid accessing non-existent disk path.
    * Fix a directory generation mismatch problem in an encrypted secvm file system.
    * shutdown hangs in the kernel trying to acquire revokeLock
    * Apply at your convenience. Even if you hit this bug, an equivalent cleanup is completed later in the command execution.
    * improved stability of daemon-to-daemon communications when cipherList is set to a real cipher (i.e. not AUTHONLY).
    * The serial number of physical disks is now recorded in the GNR event log, and displayed in the mmlspdisk command.
    * GNR on AIX allows only 32K segments.
    * Fixes a problem with fsck repair of a corrupt root directory inode.
    * mmbackup tricked by false TSM success messages. mmbackup can be fooled by TSM output when dsmc decides to roll back a transaction of multiple files being backed up. When the TSM server runs out of data storage space, the current transaction, which may hold many files, is rolled back and retried with each file separately. The failure of a file to be backed up in this case was not detected, because the earlier message from dsmc contained "Normal File --> <path> [Sent]" even though the file was later rolled back. Fixes in tsbuhelper now detect the "** Unsuccessfull **" failure signature and, instead of simply ignoring these records, revert the changes in the shadow DB for the matching record(s). The hash table already keeps track of the last change in each record, so reverting is now a legal state transition for hashed records. Reorganized some debug messages and streamlined some common code to work better. Now also find the 'failed' string to issue reversion updates. Fixed pattern matching in tsbackup33.pl to properly display all "ANS####" messages.
    * Fix RO cache i/o error if mounting fs in ro mode.
    * Don't release mutex on daemon death.
    * Fix the path buffer length calculation to return the correct length for the dm_handle_to_path() functions.
    * Fix bug in mmauth that may cause duplicate configuration entries and node number mismatch in the configuration file.
    * Fix a problem with creating a directory if the parent directory has a default POSIX ACL.
    * mmbackup fails to read hex env values. mmbackup debug values, progress reporting, and possibly other user settings may be presented in decimal or hex, especially the bit-mapped progress and debugging settings. Perl doesn't always interpret the hex values correctly unless converted with the oct() function.
    * Correct an NLS-related problem with mmchdisk and similar commands.
    * This update addresses the following APARs: IV60187 IV60468 IV60469 IV60471 IV60475 IV60478 IV60543 IV60863 IV60864.

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-07-09T21:07:22Z  

    Abstract:
    Conflicting advisory locks may result in undetected data corruption.

    Problem Summary:
    Conflicting advisory locks may be granted when applications use fcntl() to acquire advisory
    locks on a GPFS file. After a node acquires an advisory  lock, if another node takes any
    action which triggers an inode token revoke (such as running the chmod command on the file),
    the fcntl token may be released while the fcntl lock is still held.  If another application
    process then asks for a conflicting advisory lock for the same file, that request could be
    granted.  User data could then be corrupted, because the two different application processes
    believe they have exclusive access to the same file range.

    GPFS metadata will not be affected.   Only customers using fcntl advisory locks at the affected GPFS levels could be impacted.

    Users affected (both of the following conditions must apply for customer to be affected):

    • Customers running GPFS service levels 4.1.0.0, or 4.1.0.1.
    • Customer workload includes use of fcntl advisory locks on GPFS files.


    Problem Description:
    See "Problem Summary".

    Recommendation:
    Affected customers should contact IBM Service for an efix containing APAR IV62043 to be applied as soon as possible.   IBM plans to make this fix available in GPFS 4.1.0.2 (APAR IV62043).

    Updated on 2014-07-09T21:15:00Z at 2014-07-09T21:15:00Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-07-16T15:23:17Z  

    Abstract:  
    GPFS may incorrectly permit a disk that is too large for the file system to be added to an existing FPO storage pool, resulting in undetected data corruption of the file system.

    Problem Summary:
    GPFS is designed to impose a maximum allowable disk size for disks added after the file system has been created. An integer overflow problem has been discovered in the GPFS block map allocation module that allows GPFS to incorrectly add disks that are too large to the file system.  A disk that should have failed this maximum size check could be incorrectly added to an FPO enabled storage pool.  If a disk is added with a size equal to or larger than 8 times the size of the disks used at the time the file system was originally created, and the file system is utilizing FPO enabled storage pools, data blocks will be corrupted when data is written to the new disk.

    Problem Description:
    See Problem Summary

    Users affected:   
    File systems that have added a disk or disks that are equal to or larger than 8 times the size of the disk used at the time the file system was originally created and GPFS has FPO enabled storage pools.  

    Please check the following conditions to assess if your file system is at risk:

    1. Is the file system FPO enabled?  Use the following command to determine:
    mmlspool <fsname> <poolname> -L | grep allowWriteAffinity

    If the reported value is yes, then this is an FPO pool.

    2. Is the file system metadata block size larger than 256KB?  Use the following command to determine:
    mmlspool <fsname> system -L | grep blockSize

    3. Have you added disks (or plan to add disks) to the file system that are equal to or larger than 8 times the size of the original disks via the mmadddisk command?
    The new disks must be equal to or larger than 8 times the capacity of the largest disk existing at file system creation time. Use the following mmdf command to review the sizes of the disks belonging to the file system. Review the "disk size" column to identify disks that are equal to or larger than 8 times the size of the original disks in the file system.

    mmdf <filesystem>
     
    Below is an example from an affected system. The disk data01node04 was added after the file system was created and its size (3.5 TB) is more than 8 times the original largest disk size (296 GB).
     
    # mmdf gpfs1
    disk                    disk size  failure  holds     holds            free KB             free KB
    name                        in KB    group  metadata  data      in full blocks        in fragments
    ---------------    --------------  -------  --------  -----  ------------------  ------------------
    Disks in storage pool: system (Maximum disk size allowed is 9.0 TB)
    data01node01            296874976     1001  yes       yes      293232640 ( 99%)          2144 ( 0%)
    data02node01            296874976     1001  yes       yes      293307392 ( 99%)          2784 ( 0%)
    data02node02            296874976     1002  yes       yes      296857600 (100%)          1984 ( 0%)
    data01node02            296874976     1002  yes       yes      296856576 (100%)          1984 ( 0%)
    data02node03            296874976     1003  yes       yes      293314560 ( 99%)          2784 ( 0%)
    data01node03            296874976     1003  yes       yes      293234688 ( 99%)          2144 ( 0%)
    data01node04           3512693760     1004  yes       yes     1306125312 ( 37%)     404892224 (12%)
                       --------------                            ------------------  ------------------
    (pool total)           5293943616                             3072928768 ( 58%)     404906048 ( 8%)

    4. If you are still unable to determine whether you are affected, you will need to unmount the file system and run mmfsck. Please contact IBM support for assistance and further details on running mmfsck.
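
    The checks in steps 1 through 3 can be combined into a short script. The sketch below only wraps the commands shown above; the file system and pool names are placeholders, and interpreting the mmdf output (comparing added disks against 8 times the original largest disk) remains a manual step.

    # Minimal sketch: run the risk checks for one file system (substitute your
    # own file system and FPO pool names).
    fs=gpfs1
    pool=datapool
    echo "allowWriteAffinity setting for pool $pool:"
    mmlspool $fs $pool -L | grep allowWriteAffinity
    echo "system pool block size:"
    mmlspool $fs system -L | grep blockSize
    echo "disk sizes (compare added disks against 8x the original largest disk):"
    mmdf $fs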


    Required Actions:

    IBM recommends that GPFS FPO enabled users apply a fix as soon as it is available and before adding new disks to the file system.  


    For GPFS 3.5, the fix is available in GPFS 3.5.0.19 (IV60817) on the Fix Central site (http://www.ibm.com/eserver/support/fixes/).  


    For GPFS 4.1, IBM plans to make the fix available in GPFS 4.1.0-2 (IV62418) on the Fix Central site. Until the fix for 4.1 is available, IBM has an efix. Please contact IBM support if you need the efix.  

    If you have determined that you are affected, please call IBM support as soon as possible for assistance with data recovery.

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-08-08T13:13:22Z  

    GPFS 4.1.0.2 is now available from IBM Fix Central:                     

    http://www-933.ibm.com/support/fixcentral

    Problems fixed in GPFS 4.1.0.2

    August 5, 2014

    * Add tsm server selection option and improve messages.
    * write(fd,NULL,bs) gives rc -1 and inconsistent behavior. Added a check in code to validate whether the user-provided buffer is NULL. If the user-provided buffer for a read/write system call is NULL, then an error is returned much earlier in the code.
    * Fix various problems with RDMA reconnection.
    * Fix a rare case of live lock which can happen when an FPO file system is in a low space situation.
    * Fix two integer overflow problems in the GPFS block map allocation module caused by adding a larger disk into an existing file system. The problem can lead to lost blocks and data corruption.
    * Avoid very rare log recovery failure after restripe of snapshot files.                                              
    * Prevent GPFS file system program mmfsd crash on a GNR/GSS system while deleting a log vdisk.                        
    * Fix a problem in locking code that can cause GPFS daemon assert under certain rare race condition. The chance is slightly higher under 4.1.                                                                                               
    * Prevent file system errors in the face of too many disk errors.                                                     
    * Offline fsck fileset 0 record false positive on v3.3 and older filesystem.                                          
    * Fix a defect in the fileset snapshot restore tool when the tool tries to restore attributes of directories which have been deleted after the fileset snapshot was created.
    * Apply if you see tsapolicy failing immediately after a helper node goes down.
    * Eliminate FSSTRUCT errors from occurring during the image restore process. Prevent the gpfsLookup() common function from proceeding if the stripe group is being image restored.
    * Fix a node crash defect in the gpfs_ireaddirx GPFS API when it is used to list changed dentries for a directory which has data in the inode.
    * Improved stability of mmfsadm dump tscomm.                                                                          
    * Install this fix if you have non-standard enclosure / drawer hardware not found in GSS systems.                     
    * Fix a defect in the fileset snapshot restore tool when it tries to restore a clone file which has been deleted after the snapshot was created.
    * Ignore Grace msg on nodes that do not support Ganesha.
    * Fixed a hang problem due to lock overflow.
    * Fix a problem where a user-registered callback is unexpectedly invoked when using mount instead of mmmount.
    * Fix a generation number mismatch defect when creating a fileset in a GPFS secvm file system.
    * Fixed a race condition that could lead to an assertion failure when mmpmon is used.                                 
    * Fixed Assert 'filesetP->hasInodeSpace == 0' in offline fsck.                                                        
    * Fixed problem in multi acquire and release with FGDL.                                                               
    * When there is a GPFS failure return EUNATCH to Ganesha.                                                             
    * Fileset snapshot restore tool restores dir attributes more effectively.                                             
    * Without this fix, a setup with 4 or more drawers in an enclosure may not be able to survive the loss of the enclosure even though mmlsrecoverygroup -L states that disk group fault tolerance can survive an enclosure loss.
    * Fix a defect where the restore tool cannot sync the restoring fileset when the file system manager node of the restoring fileset is running GPFS 4.1.0.0 and the restore command is running on a node which runs a higher version.
    * Fixed online fsck assert after block allocation map export.                                                         
    * Must make sure that all the interfaces are enabled.                                                                 
    * Fixed Ganesha not using right interface in RHEL6.5.                                                                 
    * Fix GPFS_FCNTL_GET_DATABLKDISKIDX fcntl API to return location info of pre allocated block correctly.               
    * clear sector 0 and last 4k bytes of the disk before it is created as NSD to prevent accidental GPT table recovery by uEFI driver.                                                                                                         
    * Fix a race condition problem in fileset snapshot restore tool when it tries to restore extended attributes for a directory.                                                                                                               
    * When GPFS kernel module is loaded on Linux, look up dependent symbols on demand.
    * Fix stale mount handling in case of home mount issues.
    * Fixed problem in scanForFileset when sgmgr resigns while the scan is in progress.
    * Fixed problem where the GPFS daemon may terminate abnormally while performing encryption key rewrap (mmapplypolicy command with "CHANGE ENCRYPTION KEYS" rule). Fixed problem where mmrestorefs -j on an encrypted file system may result in the file system being unmounted.
    * Prevent multiple eviction processes from running simultaneously.
    * Assert in Deque after gracePeriodThread runs.
    * Update mmchmgr to pick the best candidate as new filesystem manager when user did not specify a new manager node.
    * Fix a memory leak in the GPFS daemon associated with Events Exporter, mmpmon,and SNMP support.
    * In GPFS systems employing GPFS Native RAID, there was a situation in which failover and failback, and disk replacement operations could hang, requiring GPFS to be restarted on the node to clear the hang. Fix is extremely low risk and highly recommended.
    * Fix mmsetquota bug that returns invalid argument if a stanza contains a fileset attribute along with type=FILESET.
    * Fix deadlock if fs panics during E_IO err processing.
    * mmdeldisk is blocked while phase3 recovery is doing deferred deletions. It is enough to wait until log recovery is done.
    * Ensure SQL migration is done on GSS nodes only.
    * Use maxLogReplicas instead of defaultMaxMetadataReplicas to calculate the number of new log items when a new striped log is added.
    * Limit PORTMAP inactive failure due to DNS busy.
    * Ensure that vdisks are automatically scrubbed periodically.
    * Initialize the fromFsP to NULL in openArch() to guard against ever calling gpfs_free_fssnaphandle() with a bad argument. Add an informative message to look for an error log in /tmp when the file writer pipeline is broken.
    * Correct the multi-release table to avoid releasing fcntl tokens prematurely.
    * Fixed race condition between two threads trying to become metanode at the same time.
    * Ensured not to create file if it already exists for NFS when Ganesha is running.
    * Fixed a typo in the removeOpenFileFrombgdList function that caused sig 11.
    * Fix code to prevent potential GPFS daemon assert during log recovery. This problem only occurs when the filesystem is in 4.1 format with 4K alignment enabled (4K inode size, etc). Data replication has to be enabled and direct I/O used for writes with a buffer size that is not 4K aligned.
    * This update addresses the following APARs: IV61626 IV61628 IV61630 IV61655 IV61988 IV61991 IV61995 IV62043 IV62091 IV62215 IV62243 IV62418.

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-08-20T21:12:53Z  

    Flash (Alert)

    Abstract:
    GPFS on clusters enabled for RDMA use may experience server crashes, RDMA failures, hangs, or undetected data corruption.

    Problem Summary:
    IBM has identified a problem with GPFS versions 3.5.0.17 efix18 and efix19, 3.5.0.19 and 4.1.0.2, for clusters enabled for GPFS RDMA
    when the value configured for verbsRdmasPerNode is less than the value configured for nsdMaxWorkerThreads for any NSD server.  Under
    certain conditions, the NSD server thread may get an indication that the RDMA completed successfully before the RDMA actually completes.
    This problem may result in NSD server crashes, RDMA failures, hung NSD server threads, or undetected data corruption.

    Problem Description:
    See Problem Summary.

    Users Affected:
    Only customers running the affected levels, configured to use RDMA, with a value of verbsRdmasPerNode that is less than the value
    configured for nsdMaxWorkerThreads for the NSD servers, are vulnerable to the problem.

    To verify if the NSD servers are vulnerable to the problem, run the following command on each NSD server:

        mmfsadm test verbs config | grep -e "Max RDMAs per node"

    In the examples below:

    The value for "Max RDMAs per node max" corresponds to nsdMaxWorkerThreads.
    The value for "Max RDMAs per node curr" corresponds to verbsRdmasPerNode (which may be adjusted by GPFS).

    An example of an NSD server that is not vulnerable to the problem:

        In this example, "Max RDMAs per node max" reports the same value (512) as "Max RDMAs per node curr" (512):

        mmfsadm test verbs config | grep -e "Max RDMAs per node"
          Max RDMAs per node max              : 512
          Max RDMAs per node curr             : 512

    An example of an NSD server that is vulnerable to the problem:

        In this example, "Max RDMAs per node max" reports a value (512) that is greater than "Max RDMAs per node curr" (128):

        mmfsadm test verbs config | grep -e "Max RDMAs per node"
          Max RDMAs per node max              : 512
          Max RDMAs per node curr             : 128
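
    The per-node check can also be run across the cluster in one step. The sketch below simply wraps the command shown above with mmdsh (as used elsewhere in this thread); only the NSD servers matter for this problem, but running it on all nodes is harmless. A node is vulnerable when its "curr" value is smaller than its "max" value.

        mmdsh -N all 'mmfsadm test verbs config | grep -e "Max RDMAs per node"'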


    Recommendations:

    IBM recommends that customers vulnerable to the problem immediately install an efix including IV63698; for the affected levels, the relevant efixes are:

        If 3.5.0.17 efix18 or efix19 is installed, then install 3.5.0.17 efix20
        If 3.5.0.19 is installed, then install 3.5.0.19 efix5
        If 4.1.0.2 is installed, then install 4.1.0.2 efix2

    Customers vulnerable to the problem but unable to immediately apply the above fix levels should run the following command to change the
    value of verbsRdmasPerNode to equal the value of nsdMaxWorkerThreads for each NSD Server, as a workaround.  The customer may experience
    performance impacts while this workaround is in effect.

        In this example, N = the value configured for nsdMaxWorkerThreads.

        mmchconfig verbsRdmasPerNode=N

    Customers vulnerable to the problem, after applying the service or workaround above, should contact GPFS support for instructions to run
    mmfsck to detect and repair any metadata damage.

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-10-16T17:02:31Z  

    Flash (Alert)

    In Linux environments, GPFS may incorrectly fail writev() with EINVAL resulting in the user application failing during write

    http://www-01.ibm.com/support/docview.wss?uid=isg3T1021392


    Abstract

    IBM has identified a problem with GPFS 3.5.0.20 and GPFS 4.1.0.2 where GPFS may fail to correctly handle multiple vectors passed via the writev() system call. When a {NULL, 0} is passed as the first vector, an EINVAL error may be incorrectly returned. This would cause the user application to fail unexpectedly when writev() is called to write to a GPFS file. User data are not affected. The writev() call is most likely to have been automatically generated by the library or compiler.


    Content

    Users affected: Only customers running the affected levels on Linux who have applications which use the writev() system call for writes to a GPFS file.

    Note: The writev() call is most likely to have been automatically generated by the library or compiler. For example, using the C++ stream class to write more than 1023 bytes to a file will generate a writev() call that could fail with an EINVAL error.

    The following sample program shows an example for which a write using stream class may fail unexpectedly:

    #include <cassert>
    #include <cstdio>
    #include <fstream>

    int main (int argc, char** argv) {
        assert(argc == 2);

        char* data = new char[1000000];
        std::ofstream f(argv[1], std::ios_base::binary);
        f.write(data, 1023); // this would succeed
        perror("write call");
        f.flush();

        f.write(data, 1024); // this would fail
        perror("write call");
        f.flush();

        f.write(data, 1025); // this would fail
        perror("write call");
        f.flush();

        f.write(data, 1023); // this would succeed
        perror("write call");
        f.write(data, 1024); // this would succeed
        perror("write call");
        f.flush();

        f.write(data, 1024); // this would fail
        perror("write call");
        f.write(data, 1023); // this would succeed
        perror("write call");
        f.flush();

        f.write(data, 512); // this would succeed
        perror("write call");
        f.write(data, 512); // this would succeed
        perror("write call");
        f.write(data, 1024); // this would succeed
        perror("write call");
        f.flush();

        f.close();
        delete[] data;
        return 0;
    }

    Recommendation

    Affected V3.5 customers should contact IBM Service for an efix containing APAR IV64863; IBM plans to make this fix available in GPFS 3.5.0.21 (APAR IV64863). V4.1 customers should upgrade to GPFS 4.1.0.3 (APAR IV64862), available from Fix Central at http://www.ibm.com/eserver/support/fixes/

     

    Updated on 2014-10-17T12:00:32Z at 2014-10-17T12:00:32Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-11-04T12:42:04Z  

    IBM has identified a problem with the mmrestripefs -c command

    Flash (Alert) http://www-01.ibm.com/support/docview.wss?uid=isg3T1021472

    Abstract

    IBM has identified a problem with the online replica compare/repair function invoked via the mmrestripefs -c command.


    Content

    Problem Summary:

    IBM has identified a problem with the online replica compare/repair function invoked via the mmrestripefs -c command, when invoked with any disks having status other than ready/replacement, for example with any disks in suspended state. This function may cause file system corruption, with potential loss or corruption of user files, if any replica is to be copied or moved for reasons other than replica mismatch. This problem affects customers running any PTF level of GPFS 3.5 from GPFS 3.5.0.11 through 3.5.0.20, or any level of GPFS 4.1 from GPFS 4.1.0.0 through GPFS 4.1.0.3. The function provided by the mmrestripefs -c command is disabled in PTFs 3.5.0.21 and 4.1.0.4.

    Users affected:

    This problem affects customers running any PTF level of GPFS 3.5 from GPFS 3.5.0.11 through 3.5.0.20, or any level of GPFS 4.1 from GPFS 4.1.0.0 through GPFS 4.1.0.3.

    This problem can only occur when the mmrestripefs -c command is run while there are disks with status other than ready/replacement. For a file system with data replication, the problem may only occur if none of the replicas of a data block are on a disk with status of ready/replacement.

    Users can check if they may have been affected by running the following command to determine if mmrestripefs -c was ever issued in the cluster:

    mmdsh -N all grep "mmrestripefs.*-c" /var/adm/ras/mmfs.log\*

    If the result of the grep indicates that the command has been run, contact IBM Service.

    Recommendation:

    • Avoid running mmrestripefs with the -c option until a fix is made available by IBM. A fix will be made available for GPFS V3.5 with APAR IV66437 and GPFS V4.1 with APAR IV66123.
    • Contact IBM for an efix to disable the code; APAR IV66270 for GPFS V3.5 and APAR IV66271 for GPFS V4.1.
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-12-05T21:39:04Z  

    GPFS 4.1.0.5 is now available from IBM Fix Central:

    http://www-933.ibm.com/support/fixcentral

    Problems fixed in GPFS 4.1.0.5

    December 5, 2014

    * Fix an alloc cursor issue in block allocation code that may lead to spurious no space error in FPO file system.
    * Fixed code to consider flags during a disks challenge of the current cluster manager.
    * This fix applies to GNR/GSS customers that are adding additional server nodes and whose cluster was created using the default block size.
    * Reduce number of nsdMsgRdmaPrepare messages sent.
    * Fix GSS bug related to concurrent overlapping read operations during failover/error scenarios.
    * Fixed a problem where the range size was initialized to a wrong value, using the metadata block size instead of the correct data block size.
    * Fix the problem that, for certain configurations of an FPO cluster, replicas failed to involve all LGs.
    * Redirect automatic recovery's tsrestripefs output to /var/adm/ras/restripefsOnDiskFailure.log.
    * Fix problem with verbsRdmasPer[Node Connection] set to a value of 1.
    * Reduce CNFS failover time on systems with a large list of exports.
    * Allow disk addresses in inode 5 (Extended Attribute File) to be found by the mmfileid command.
    * Fix GSS bug related to mixed read-write workload with a working set size that matches GSS cache memory size.
    * Fixed a problem where the range number was initialized to a wrong value in a different metadata block size environment and a full block write after lseek without placement installed.
    * fcntl revokes referencing a completed/freed request.
    * Hadoop File System API open() in the connector throws an exception, as hdfs does, when the user has no permission to access a file.
    * Fix bug where mmsetquota set the inode limit to unlimited if only a block quota change is requested, and vice versa.
    * If the user of a GSS system had previously changed the slow disk detection parameters manually to the following values: nsdRAIDDiskPerformanceMinLimitPct=50 and nsdRAIDDiskPerformanceShortTimeConstant=25000, then they can now remove the manual setting; but they don't have to remove it.
    * Fixed problem where the gpfs daemon gets sig11 when application calls GPFS_FCNTL_GET_DATABLKLOC api in mixed PPC64/X64 cluster.
    * This update addresses the following APARs: IV66617 IV66620 IV66621 IV67005 IV67006.

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2014-12-22T20:11:51Z  

    Security Bulletin: TLS padding vulnerability affects GPFS V4.1 (CVE-2014-8730)

    View  the complete Security Bulletin at  http://www.ibm.com/support/docview.wss?uid=isg3T1021845

    Summary

    Transport Layer Security (TLS) padding vulnerability via a POODLE (Padding Oracle On Downgraded Legacy Encryption) like attack affects GPFS V4.1.

    Vulnerability Details


    CVE-ID: CVE-2014-8730

    DESCRIPTION:

    Product could allow a remote attacker to obtain sensitive information, caused by the failure to check the contents of the padding bytes when using CBC cipher suites of some TLS implementations. A remote user with the ability to conduct a man-in-the-middle attack could exploit this vulnerability via a POODLE (Padding Oracle On Downgraded Legacy Encryption) like attack to decrypt sensitive information and calculate the plaintext of secure connections.

    CVSS Base Score: 4.3
    CVSS Temporal Score: See http://xforce.iss.net/xforce/xfdb/99216 for the current score
    CVSS Environmental Score*: Undefined
    CVSS Vector: (AV:N/AC:M/Au:N/C:P/I:N/A:N)

    Affected Products and Versions

    GPFS 4.1 is the only version affected. Furthermore, customers running GPFS V4.1 are affected by this vulnerability only if
    1) cipherList is set to a real cipher (i.e. not AUTHONLY) in order to authenticate and encrypt data passed between nodes/clusters; or
    2) file-level encryption is used.

    Note: Users on 4.1 who set cipherList to AUTHONLY are not affected, given that the vulnerability compromises confidentiality, whereas AUTHONLY addresses authentication alone. These users would be at risk if at a later point they set cipherList to a real cipher, and should apply this fix before doing so. Nevertheless, if you want to avoid the risk of your system becoming exposed should cipherList later be set to a real cipher or file-level encryption be enabled, obtain the fix identified below.

    Remediation/Fixes

    Apply GPFS 4.1.0.6 for your Edition of GPFS  available from Fix Central at http://www-933.ibm.com/support/fixcentral/
     

    Workarounds and Mitigations

    If you have a GPFS system where cipherList has been set to a real cipher (i.e. not AUTHONLY), and all nodes run GPFS V4.1, you can apply the following workaround: set nistCompliance to SP800-131A and cipherList to one of AES128-GCM-SHA256 or AES256-GCM-SHA384. This will ensure that
    the cipher suite used in the course of the communication does not employ the CBC mode of operation, whose padding is the source of the vulnerability. If you want to be certain that you are not potentially exposed to this vulnerability should you later make a change in your system that negates this workaround, you should apply GPFS 4.1.0.6 for your edition of GPFS, available from Fix Central at http://www-933.ibm.com/support/fixcentral/
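
    As a minimal sketch only, the workaround amounts to two cluster-wide configuration changes. The parameter names nistCompliance and cipherList are taken from the text above, but confirm the exact mmchconfig syntax and accepted values for your GPFS level before applying them.

    # Restrict the cipher suite to a non-CBC (GCM) cipher, per the workaround above.
    mmchconfig nistCompliance=SP800-131A
    mmchconfig cipherList=AES128-GCM-SHA256    # or AES256-GCM-SHA384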

    Updated on 2015-02-09T17:53:16Z at 2015-02-09T17:53:16Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-02-11T15:01:27Z  

    GPFS 4.1.0.6 is now available from IBM Fix Central:                    

    http://www-933.ibm.com/support/fixcentral

    Problems fixed in GPFS 4.1.0.6

    February 5, 2015

    * Customer can change the spares of an existing DA when adding new pdisk.
    * Update code to prevent a deadlock that could occur when multiple threads try to create the same file in a directory at same time.                                                                                                         
    * When E_NOMEM occurs on a token manager during token transfer, try to throw out half of all OpenFile objects, regardless of which file system they belong to. Also limit the implicitly set maxStatCache (MSC) value to 10,000.
    * Reduce memory utilization for GPFS RDMA QPs and fix a problem with Connect-IB when verbsRdmaSend is enabled.        
    * Fixed the assert by mapping the error to permission error as expected.                                              
    * Fixed a problem where creating a clone copy file, in an encrypted file system may result in threads being blocked or in an abnormal daemon termination.                                                                                   
    * Correct tsbuhelper updateshadow command to recognize several circumstances when gencount changes even for files not yet backed up, or changed.                                                                                            
    * Fixed the problem by changing the implementation of flushpending while operations get requeued.                     
    * In FPO, use disks evenly when doing mmrestripefs -r or mmdeldisk.                                                   
    * Fix is especially recommended for GSS systems on the Power platform.                                                
    * Fixed the problem by allowing prefetch recovery to run only if all nodes in the cluster are at least at minReleaseLevel 1410.
    * Fixed the problem by clearing CA_STATE|CA_CREATE bits after local rm is done.
    * Fix a defect which may cause a data consistency problem if one runs mmrestripefile after reducing the replica level.
    * Fix is recommended for all GSS systems using NVRAM log tip devices.
    * When creating FPO file system, always use 'cluster' layoutMap if allowWriteAffinity is yes and layoutMap is unspecified.
    * Fixed a problem where enabling file system quota would hit an assert.
    * "Waiting for nn user(s) of shared segment" messages on shutdown.
    * Fixed the problem by reading in-memory attributes for the dump command instead of reading them from disk.
    * Fsck reports false corruption in inode alloc map due to rounding on wrong sector size.
    * Fix code that could cause mmclone copy to fail incorrectly with EINVAL. This problem could only occur when the source is a file in a GPFS snapshot.
    * Fix GNR bug related to a race condition that causes recovery failure during startup or failover.
    * Fix a mmdeldisk problem caused by disabled quota files placed in the system pool.
    * Fixes a problem where fsck hits signal 8 during inode validation.
    * Make sure that the inode pointed to by a different dentry is not the same inode, to prevent a possible deadlock caused by two different dentries pointing to the same inode and trying to lock the same inode twice.
    * Fix potential loss of IO error in linux io_getevents() call when enableLinuxReplicatedAio is enabled (3.5.0.14+). Fix a problem that returns 'error 217' improperly when doing Direct IO on a replicated file which has partial replicas on unavailable disks (4.1.0+).
    * Fix a stack corruption issue.
    * Fix a linux lookup crash issue.
    * Apply if secrecy of file metadata (pathnames, attributes and extended attributes) is a concern.
    * Revised disk selection algorithm to ensure no reuse can exist in a map entry being modified by rebuild or rebalance. Prior algorithm allowed this type of reuse and could lead to lessening the failure tolerance, trailer validation errors, and assert crashes in various GSS subsystems.
    * Ensure that GPFS uses only secure CBC padding schemes when exchanging data over TLS. This affects customers who have set cipherList to a real cipher (i.e. not AUTHONLY) in order to authenticate and encrypt data passed between nodes/clusters. This also affects customers who use file-level encryption.
    * Fix GPFS_FCNTL_GET_DATABLKLOC API, make it return the correct disk ID for data-in-inode files.
    * This fix is required for GSS server nodes in which multiple names assigned to the node differ only in the domain portion of the name.
    * Correct a problem leading to misleading tsgskkm system clock complaints.
    * Fix is recommended in all GNR configurations.
    * This update addresses the following APARs: IV67901 IV68007 IV68059 IV68060 IV68062 IV68064 IV68065 IV68096 IV68491 IV68493.
     

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-03-13T22:11:13Z  

    GPFS 4.1.0.7 is now available from IBM Fix Central:

    http://www-933.ibm.com/support/fixcentral

    Problems fixed in GPFS 4.1.0.7

    March 12, 2015

    * Fix a problem with block allocation code, where E_NOSPC error could be incorrectly returned after running out of disk space in one failure group. This problem only affects file systems with data replication.
    * When a GSS logTipBackup pdisk fails, mmlsrecoverygroup output will now display offline (as opposed to error)for the affected logTipBackup vdisk.
    * Fix a problem in an FPO environment which may cause auto recovery to fail to suspend some down disks.
    * Fix a problem when deleting files from independent fileset which causes unnecessary recalls when there are no snapshots.
    * Ensure a null pointer check is done when memory allocation fails while Ganesha is active.
    * Enforce the same declustered array (DA) name for the old pdisk and the corresponding new one when replacing a pdisk with mmaddpdisk --replace.
    * Fix problem that may cause assertion '!ofP->destroyOnLastClose' when the file system is mounted in RO mode on some nodes and in RW mode on others.
    * Enhance the handling of TSM summary output in mmbackup.
    * Fix a deadlock occurring in a heavy DIO and mmap workload on Linux.
    * Fix fsck handling of ToBeDeleted inode map bits.
    * Fix a problem where, when migrating a block, mmrestripefile/mmrestripefs -r should but does not comply with the file's WADFG attribute if that file belongs to an FPO pool which has the WAD=0, BGF=1 attribute.
    * Fix bug in change license in CCR cluster.
    * Fix a problem that might cause auto recovery failure in FPO cluster.
    * Fix code to calculate the correct payload for a remote procedure call during file updates to the GPFS internal configuration repository. The problem may cause assertion 'bufP - msgP == len'.
    * Fix a problem that might cause no space error even if there is free disk space.
    * Protect fcntl kernel calls against non-privileged callers.
    * Exclude sensitive files from gpfs.snap collection.
    * GPFS command hardening.
    * Enable dynamically switching from cipherList=EMPTY to cipherList=AUTHONLY without bringing down the entire cluster.
    * This update addresses the following APARs: IV68842 IV69086 IV69619 IV69657 IV69702 IV69704 IV69705 IV69706 IV69707 IV70610.

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-03-16T12:51:32Z  

    Security Bulletin: IBM General Parallel File System is affected by security vulnerabilities (CVE-2015-0197, CVE-2015-0198, CVE-2015-0199)

    View the complete Security Bulletin published on 2015-03-13 at http://www-01.ibm.com/support/docview.wss?uid=isg3T1022062

    Summary

    Security vulnerabilities have been identified in current levels of GPFS V4.1, V3.5, and V3.4:
    - could allow a local attacker which only has a non-privileged account to execute programs with root privileges (CVE-2015-0197)
    - may not properly authenticate network requests and could allow an attacker to execute programs remotely with root privileges (CVE-2015-0198)
    - allows attackers to cause kernel memory corruption by issuing specific ioctl calls to a character device provided by the mmfslinux kernel module and cause a denial of service (CVE-2015-0199)

    Vulnerability Details


    CVEID: CVE-2015-0197
    DESCRIPTION: IBM General Parallel File System could allow a local attacker which only has a non-privileged account to execute programs with root privileges.
    CVSS Base Score: 6.9
    CVSS Temporal Score: See http://xforce.iss.net/xforce/xfdb/101224 for the current score
    CVSS Environmental Score*: Undefined
    CVSS Vector: (AV:L/AC:M/Au:N/C:C/I:C/A:C)

    CVEID: CVE-2015-0198
    DESCRIPTION: IBM General Parallel File System may not properly authenticate network requests and could allow an attacker to execute programs remotely with root privileges.
    CVSS Base Score: 9.3
    CVSS Temporal Score: See http://xforce.iss.net/xforce/xfdb/101225 for the current score
    CVSS Environmental Score*: Undefined
    CVSS Vector: (AV:N/AC:M/Au:N/C:C/I:C/A:C)

    CVEID: CVE-2015-0199
    DESCRIPTION: IBM General Parallel File System allows attackers to cause kernel memory corruption by issuing specific ioctl calls to a character device provided by the mmfslinux kernel module and cause a denial of service.
    CVSS Base Score: 4.7
    CVSS Temporal Score: See http://xforce.iss.net/xforce/xfdb/101226 for the current score
    CVSS Environmental Score*: Undefined
    CVSS Vector: (AV:L/AC:M/Au:N/C:N/I:N/A:C)


    Affected Products and Versions

    GPFS V4.1.0.0 thru GPFS V4.1.0.6

    GPFS V3.5.0.0 thru GPFS V3.5.0.23

    GPFS V3.4.0.0 thru GPFS V3.4.0.31

    For CVE-2015-0198, you are not affected if either of the following are true:

        the cipherList configuration variable is set to AUTHONLY or to a cipher

    or

        only trusted nodes/processes/users can initiate connections to GPFS nodes

    Remediation/Fixes

    Apply GPFS 4.1.0.7, GPFS V3.5.0.24, or GPFS V3.4.0.32, as appropriate for your level of GPFS, available from Fix Central at http://www-933.ibm.com/support/fixcentral/

    For CVE-2015-0198, after applying the appropriate PTF, set cipherList to AUTHONLY.

    To enable AUTHONLY without shutting down the daemon on all nodes:

        Install the PTF containing the fix on all nodes in the cluster one node at a time
        Generate SSL keys by running the mmauth genkey new command. This step is not needed if CCR is in effect (GPFS 4.1 only)
        Enable AUTHONLY by running the mmauth update . -l AUTHONLY command

    If the mmauth update command fails, examine the messages, correct the problems (or shut down the daemon on the problem node) and repeat the mmauth update command above.

    Note: Applying the PTF for your level of GPFS (GPFS 4.1.0.7, GPFS V3.5.0.24, or GPFS V3.4.0.32) on all nodes in the cluster will allow you to switch cipherList dynamically without shutting down the GPFS daemons across the cluster. The mitigation step below will require all nodes in the cluster to be shut down.

    If there are any nodes running GPFS 3.4 on Windows then switching the cipherList dynamically is only possible in one of the following two scenarios:

        The mmauth update command is initiated from one of the GPFS 3.4 Windows nodes

    or

        If the command is issued from another node in the cluster then GPFS must be down on all the GPFS 3.4 Windows nodes


    Workarounds and Mitigations


    For CVE-2015-0197 and CVE-2015-0199, there are no workarounds or mitigations.

    For CVE-2015-0198, set cipherList to AUTHONLY or to a real cipher. Follow the instructions above if the PTF was installed on all the nodes in the cluster. Otherwise:

        Generate SSL keys by running the mmauth genkey new command
        Shut down the GPFS daemon on all nodes on the cluster
        Enable AUTHONLY by running mmauth update  . -l AUTHONLY
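
    Expressed as commands, the mitigation sequence above looks roughly like the following sketch. It assumes GPFS can be stopped cluster-wide, and it restarts the daemons afterwards with mmstartup -a (the restart is implied rather than listed above).

        mmauth genkey new            # generate new SSL keys
        mmshutdown -a                # stop the GPFS daemon on all nodes in the cluster
        mmauth update . -l AUTHONLY
        mmstartup -a                 # restart GPFS across the cluster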

     

    Get Notified about Future Security Bulletins

    Subscribe to My Notifications to be notified of important product support alerts like this.

     

    Acknowledgement

    The vulnerabilities were reported to IBM by Florian Grunow and Felix Wilhelm of ERNW

    Updated on 2015-03-19T12:13:22Z at 2015-03-19T12:13:22Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-03-30T12:23:45Z  

    The Security Bulletin, IBM General Parallel File System V4.1 is affected by a security vulnerability (CVE-2015-1890),  is available.  See the full bulletin at http://www-01.ibm.com/support/docview.wss?uid=isg3T1022077.

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-04-17T14:52:09Z  

    Security Bulletin: Vulnerability in RC4 stream cipher affects GPFS V3.5 for Windows (CVE-2015-2808) / Enabling weak cipher suites for IBM General Parallel File System is NOT recommended

    Summary
    The RC4 "Bar Mitzvah" Attack for SSL/TLS affects OpenSSH for GPFS V3.5 for Windows. Additionally, with the recent attention to RC4 "Bar Mitzvah" Attack for SSL/TLS, this is a reminder to NOT enable weak or export-level cipher suites for IBM General Parallel File System (GPFS).

    See the complete bulletin at  http://www-01.ibm.com/support/docview.wss?uid=isg3T1022137

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-05-27T18:29:13Z  

    GPFS 4.1.0.8 is now available from IBM Fix Central:

    http://www-933.ibm.com/support/fixcentral

    Problems fixed in GPFS 4.1.0.8

    May 26, 2015

    * Correct a small vulnerability in takeover after file system manager failure during a snapshot command.
    * The code change ensures that online replica compare tool does not report false positive mismatches when the file system has suspended disks.
    * Fix an AFM recovery issue during the fileset unlink.
    * Fix a problem when determining whether copy-on-write is needed or not in the presence of snapshots. Sometimes this problem may result in spurious write operation failures (especially, but not limited to file/directory creation).
    * Fix a hang in mmrestripefs, which may also result in waiters for  "PIT_Start_MultiJob". The problem may happen if the set of nodes specified in the '-N' option to the command includes nodes which are still in the process of being started (or restarted).
    * mmcrsnapshot, mmdelsnapshot and mmfileset commands quiesce the file system before they start actual work. During that quiesce, if a thread doing file deletion of an HSM migrated file is stuck waiting for a recall, which could take a long time due to slow tapes for example, then the mm commands could time out. This fix allows those commands to proceed while a deletion is waiting for recall.
    * Close a very small deadlock window caused by releasing the kssLock and calling cxiWaitEventWakeupOne when a thread not waiting for the exclusive lock is woken up, leaving the thread actually waiting for the lock asleep and waiting.
    * Avoid a GPFS crash when running mmrestorefs or mmbackup where there are deleted filesets.
    * Enable offline fsck to validate extended attribute file
    * Fix a problem with directory lookup code that can cause FSErrInodeCorrupted error to be incorrectly issued. This could occur when lookup on '..' entry of a directory occurs at the same time as its parent is being deleted.
    * Ensure that EA migration to enable FastEA support for a file system does not assert for 'Data-in-Inode' case under certain conditions
    * Enable online fsck to fix AFM pre-destroyed inodes. Use PIT to cleanup unlinked inodes in AFM disabled fileset.
    * Update allocation code to close a small timing window that could lead to file system corruption. The problem could only occur when a GPFS client has a file system panic at the same time as the new file system manager is performing a take over after the old manager resigned.
    * Fix a signal 11 problem in a multi-cluster environment when the gpfs daemon relays the fsync request through the metanode but the OpenFile gets stolen on the metanode in the middle.
    * Remove confusing trace stop failed error messages on Windows.
    * The privateSubnetOverride configuration parameter may be used to allow multiple clusters on the same private subnet to communicate even when cluster names are not specified in the 'subnets' configuration parameter.
    * This fix indicates that the mmfileid command will not work if only the GPFS Express Edition is installed.
    * Fix a workload counter used for NVRAM log tip I/O processing queues. Recommended if NVRAM log tip is in-use.
    * Potentially avoid crash on normal OS shutdown of CNFS node.
    * Fix issue where file create performance optimization was sometimes disabled unnecessarily.
    * In a cluster configured with node quorum, fix a problem where, if the cluster manager fails and the cluster is left with only the bare-minimum number of nodes to maintain node quorum, the cluster may still lose quorum.
    * Enable offline fsck to fix AFM orphan directory entries in single run
    * Fix a problem where the number of nodes allowed in a cluster is reset from 16384 to 8192.
    * This affects GSS/ESS customers who are using chdrawer to prepare to replace a failed storage enclosure drawer on an active system.
    * Correct a problem in the v4.1 release with directory listings in file systems created prior to v3.2.
    * Fix a problem that a race between log wrap and repair threads caused checksum mismatch in indirect blocks.
    * Fix a daemon crash in AFM, ensuring that the setInFlight() method has a positive 'numExecuted' value while calculating the average wait time of the messages.
    * Fix a problem on GPFS CCR clusters where GPFS commands may not work on inactive configuration servers after a new security key has been generated.
    * Fix poor command performance on a cluster that has no security key.
    * Fix a problem with DIRECT_IO write which can cause data loss when the file system panics or the node fails after a write passes the end of file using DIRECT_IO and causes an increase in file size. The file size increase could be lost.
    * File cache filled-up with deleted objects (Linux NFS)
    * Fix a hardlink creation issue by handling the E_NEEDS_COPIED error in SFSLinkFile function for AFM files.
    * Fix handling of policy rules like ... MIGRATE ... TO some-group-pool THRESHOLD (hi,lo) ...
    * The /var/mmfs/etc/RKM.conf configuration file used to configure file encryption now supports a wider set of characters.
    * Trigger a back-off when 90% of the configured hard memory limit is hit during queuing of AFM recovery operations.
    * ESS customers, using zimon, may see GPFS daemon crashes in the performance monitoring code.
    * Add support for multiple RDMA completion threads and completion queues
    * Fix signal 11 in verbs::verbsCheckConn_i
    * Fix signal 11 in runTSPcache caused by an uninitialized variable in error paths.
    * mmauth inadvertently changes cipherList to an invalid string. Changed Externals: New messages: GPFS: 6027-3708 [E] Incorrect passphrase for backend '%s'. GPFS: 6027-3709 [E] Error encountered when parsing line %d: expected a new RKM backend stanza. GPFS: 6027-3710 [E] Error encountered when parsing line %d: invalid key '%s'. GPFS: 6027-3711 [E] Error encountered when parsing line %d: invalid key-value pair. GPFS: 6027-3712 [E] Error encountered when parsing line %d: incomplete RKM backend stanza '%s'. GPFS: 6027-3713 [E] An error was encountered when parsing line %d: duplicate key '%s'. GPFS: 6027-3714 [E] Incorrect permissions for the /var/mmfs/etc/RKM.conf configuration file. Deleted messages: GPFS: 6027-3536 [E] Incorrect passphrase '%s' for backend '%s'. GPFS: 6027-3511 [E] Error encountered when parsing '%s': expected a new RKM backend stanza. GPFS: 6027-3515 [E] Error encountered when parsing '%s': invalid key-value pair. GPFS: 6027-3514 [E] Error encountered when parsing '%s': invalid key '%s'. GPFS: 6027-3516 [E] Error encountered when parsing '%s': incomplete RKM backend stanza '%s'. GPFS: 6027-3544 [E] An error was encountered when parsing '%s': duplicate key '%s'.
    * This update addresses the following APARs: IV71419 IV71569 IV71601 IV71607 IV71613 IV71616 IV71628 IV71633 IV71634 IV71636 IV71648 IV71692 IV71815 IV72029 IV72033 IV72039 IV72042 IV72048 IV72684 IV72687 IV72688 IV72694 IV72695 IV72698 IV72700 IV72890.

    Updated on 2015-05-27T18:29:55Z at 2015-05-27T18:29:55Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-05-29T16:05:17Z  

    Flash (Alert) GPFS may incorrectly encrypt files when the AES:ECB wrapping mode is used, possibly resulting in undetected data corruption of those files

    Abstract

    In the Advanced Edition of GPFS V4.1.0 or later, under certain circumstances, files may be incorrectly encrypted when the AES:ECB wrapping mode is used, possibly resulting in undetected data corruption of those files.

    See the full Flash (Alert) at http://www-01.ibm.com/support/docview.wss?uid=isg3T1022320

    Updated on 2015-08-11T17:10:26Z at 2015-08-11T17:10:26Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-06-10T19:46:21Z  

    Flash (Alert) Errors accessing directories created since upgrading old GPFS file systems to use the new features of the V4.1 release have been reported

    Abstract

    GPFS Product Support has received reports of errors accessing directories created since upgrading old GPFS file systems to use the new features of the V4.1 release. The issue is confined to the rmdir and readdir directory access paths on GPFS file systems originally created at the 3.1, or earlier, levels.

    See the full Flash (Alert)  at http://www-01.ibm.com/support/docview.wss?uid=isg3T1022368

    Updated on 2015-08-11T17:09:05Z at 2015-08-11T17:09:05Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-08-11T17:06:00Z  

    Flash (Alert)  GPFS (IBM Spectrum Scale) V4.1 Rapid Repair function may result in undetected data corruption

    Abstract

    IBM has identified a problem with the GPFS (IBM Spectrum Scale) Rapid Repair function, which is in use by default on GPFS 4.1 format file systems wherever data replication is in use, and may result in undetected data corruption.

    See the full Flash (Alert) at either http://www.ibm.com/support/docview.wss?uid=isg3T1022582 or  http://www.ibm.com/support/docview.wss?uid=ssg1S1005352

    Updated on 2015-08-11T17:07:57Z at 2015-08-11T17:07:57Z by gpfs@us.ibm.com
  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-09-12T14:36:55Z  

    Security Bulletin: Vulnerability in OpenSSL affects IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 (CVE-2015-1788)

    Summary
    An OpenSSL denial of service vulnerability disclosed by the OpenSSL Project affects GSKit. IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 use GSKit and addressed the applicable CVE.

    See the complete bulletin at either  http://www-01.ibm.com/support/docview.wss?uid=isg3T1022618    or  http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005364

     

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-09-17T18:23:16Z  

    Security Bulletin: IBM Spectrum Scale and IBM GPFS are affected by security vulnerabilities (CVE-2015-4974, CVE-2015-4981)

    Summary

    Security vulnerabilities have been identified in the current levels of IBM Spectrum Scale V4.1.1, IBM GPFS V4.1 and V3.5:
    - could allow a local non privileged attacker to execute commands with root privileges (CVE-2015-4974)
    - could allow a local non privileged attacker to read system memory contents (CVE-2015-4981)

     

    See the complete bulletin at either http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005366 or http://www-01.ibm.com/support/docview.wss?uid=isg3T1022637

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2015-12-04T13:39:13Z  

    Security Bulletin: IBM Spectrum Scale V4.1.1, IBM GPFS V4.1, and IBM V3.5 for AIX are affected by a security vulnerability (CVE-2015-7403)

    Summary

    A security vulnerability has been identified in the current levels of IBM Spectrum Scale V4.1.1, IBM GPFS V4.1 and V3.5 that could allow a local attacker to cause the node they are on to crash.

     

    See the complete bulletin at either http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005452 or http://www-01.ibm.com/support/docview.wss?uid=isg3T1022940

  • gpfs@us.ibm.com
    gpfs@us.ibm.com
    344 Posts

    Re: GPFS V4.1 Announcements

    ‏2016-04-04T17:12:26Z  

    Security Bulletin: IBM Spectrum Scale is affected by a security vulnerability (CVE-2016-0263)


    Summary

    A security vulnerability has been identified in the current levels of IBM Spectrum Scale V4.2, V4.1 and IBM General Parallel File System V3.5, that could allow a local user, under special circumstances, to escalate their privileges or cause a denial of service when the mmapplypolicy command is issued with certain options and syntax.

     

    See the complete bulletin at either http://www-01.ibm.com/support/docview.wss?uid=isg3T1023450 or http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005708