Fix Readme
Abstract
This Readme describes GPFS 3.4.0.2, an update for IBM General Parallel File System on Linux x86_64 (RHEL and SLES).
Content
Readme file for: GPFS Readme header
Product/Component Release: 3.4.0.2
Update Name: GPFS-3.4.0.2-x86_64-Linux
Fix ID: GPFS-3.4.0.2-x86_64-Linux
Publication Date: 28 October 2010
Last modified date: 28 October 2010
Installation information
Download location
Below is a list of components, platforms, and file names that apply to this Readme file.
| Product/Component Name: | Platform: | Fix: |
|---|---|---|
| (GPFS) IBM General Parallel File System | Linux 64-bit, x86_64 RHEL; Linux 64-bit, x86_64 SLES | GPFS-3.4.0.2-x86_64-Linux |
Prerequisites and co-requisites
None
Known issues
Problem discovered in earlier GPFS releases
During internal testing, a rare but potentially serious problem was discovered in GPFS. Under certain conditions, a read from a cached block in the GPFS pagepool may return incorrect data, and the error is not detected by GPFS. The issue is corrected in GPFS 3.3.0.5 (APAR IZ70396) and GPFS 3.2.1.19 (APAR IZ72671). All prior versions of GPFS are affected.
The issue was discovered during internal testing, where an MPI-IO application was used to generate a synthetic workload. IBM is not aware of any occurrences of this issue in customer environments or under any other circumstances. Because the issue is specific to accessing cached data, it does not affect applications that use DirectIO (the I/O mechanism that bypasses the file system cache, used primarily by databases such as DB2® or Oracle).
This issue is limited to the following conditions:
- The workload consists of a mixture of writes and reads, to file offsets that do not fall on the GPFS file system block boundaries;
- The IO pattern is a mixture of sequential and random accesses to the same set of blocks, with the random accesses occurring on offsets not aligned on the file system block boundaries; and
- The active set of data blocks is small enough to fit entirely in the GPFS pagepool.
The issue is caused by a race between an application I/O thread reading from a partially filled block (such a block may be created by an earlier write to an odd offset within the block) and a GPFS prefetch thread trying to convert the same block into a fully filled one by reading in the missing data, in anticipation of a future full-block read. Due to insufficient synchronization between the two threads, the application reader thread may read data that was partially overwritten with the content found at a different offset within the same block. The issue is transient: the next read from the same location returns correct data. The issue is limited to a single node; other nodes reading from the same file are unaffected.
Korn Shell for SLES 10
GPFS requires Korn Shell version ksh-93r-12.16 for SLES 10 support; it can be obtained at the following architecture-specific link.
Installation information
Installing a GPFS update for System x
Complete the following steps to install the fix package:
- Unzip and extract the update package (< filename >.tar.gz file) with one of the following commands:
gzip -d -c < filename >.tar.gz | tar -xvf -
or
tar -xzvf < filename >.tar.gz
- Verify the update's RPM images in the directory. Normally, the list of RPM images in this directory is similar to one of the following:
GPFS update
gpfs.base.< update_version >.< arch >.update.rpm
gpfs.docs.< update_version >.noarch.rpm
gpfs.gpl.< update_version >.noarch.rpm
gpfs.msg.en_US.< update_version >.noarch.rpm
GPFS update with GPL licensed kernel module
gpfs.base.< update_version >.< arch >.update.rpm
gpfs.docs.< update_version >.noarch.rpm
gpfs.gpl.< update_version >.noarch.gpl.rpm
gpfs.msg.en_US.< update_version >.noarch.rpm
where
< update_version > specifies the version number of the update you downloaded, for example, 3.1.0-7.
and
< arch > specifies the system architecture, for example i386 for 32-bit System x or x86_64 for 64-bit System x.
For specific filenames, check the Readme for the GPFS update by clicking the "View" link for the update on the Download tab.
- Follow the installation and migration instructions in your GPFS Concepts, Planning and Installation Guide.
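As a concrete illustration of the extraction step, the sketch below builds a dummy update tarball and extracts it both ways; the filename and directory names are placeholders for this demonstration, not the real update package.

```shell
# Sketch of the extraction step using a placeholder tarball.
# Build a dummy update.tar.gz so both commands can be demonstrated end to end.
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir pkg
echo "dummy rpm payload" > pkg/gpfs.base-3.4.0-2.x86_64.update.rpm
tar -czf update.tar.gz -C pkg .

# Method 1: pipe gzip output into tar
mkdir extract1 && gzip -d -c update.tar.gz | tar -xvf - -C extract1

# Method 2: let tar handle the decompression itself
mkdir extract2 && tar -xzvf update.tar.gz -C extract2

ls extract1 extract2   # both directories now hold the RPM image
```

Both methods produce the same result; the second simply avoids the explicit pipe.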
Upgrading GPFS nodes
Note that node-by-node upgrade cannot be used to migrate from GPFS 2.3 to later releases. For example, upgrading from 2.3.x to 3.1.y requires a complete cluster shutdown, an upgrade install on all nodes, and then cluster startup.
GPFS may be upgraded either one node at a time or on all nodes in the cluster at once. When upgrading one node at a time, perform the steps below on each node in the cluster sequentially. When upgrading the entire cluster at once, GPFS must be shut down on all nodes before upgrading.
When upgrading nodes one at a time, you may need to plan the order in which to upgrade them. Verify that stopping a particular machine neither causes quorum to be lost nor takes down an NSD server that is the last remaining server for some disks. Upgrade the quorum and manager nodes first. When upgrading the quorum nodes, upgrade the cluster manager last to avoid unnecessary cluster failover and election of new cluster managers.
- Prior to upgrading GPFS on a node, all applications that depend on GPFS (e.g. Oracle) must be stopped. Any GPFS file systems that are NFS exported must be unexported prior to unmounting GPFS file systems. If tracing was turned on, then tracing must be turned off before shutting down GPFS as well.
- Stop GPFS on the node. Verify that the GPFS daemon has terminated and that the kernel extensions have been unloaded (mmfsenv -u). If mmfsenv -u reports that it cannot unload the kernel extensions because they are "busy", the install can proceed, but the node must be rebooted after the install. "Busy" means that some process has its current directory in a GPFS file system directory or holds an open file descriptor there. The freeware program lsof can identify such a process, which can then be killed. Retry mmfsenv -u; if it succeeds, a reboot of the node can be avoided.
- Upgrade GPFS using the RPM command as follows:
GPFS update
rpm -U gpfs.base-< update_version >.< arch >.update.rpm
rpm -U gpfs.docs-< update_version >.noarch.rpm
rpm -U gpfs.gpl-< update_version >.noarch.rpm
rpm -U gpfs.msg.en_US-< update_version >.noarch.rpm
GPFS update with GPL licensed kernel module
rpm -U gpfs.base-< update_version >.< arch >.update.rpm
rpm -U gpfs.docs-< update_version >.noarch.rpm
rpm -U gpfs.gpl-< update_version >.noarch.gpl.rpm
rpm -U gpfs.msg.en_US-< update_version >.noarch.rpm
- Check the GPFS FAQ to see if any additional images or patches are required for your Linux installation: General Parallel File System FAQs (GPFS FAQs)
- Recompile any GPFS portability layer modules you may have previously compiled. The recompilation and installation procedure is outlined in the following file:
/usr/lpp/mmfs/src/README
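The per-node sequence above could be scripted roughly as follows. This is a sketch only: the VERSION and ARCH values are illustrative assumptions, and a production rollout would need error handling, the quorum/cluster-manager ordering described earlier, and the portability-layer rebuild from /usr/lpp/mmfs/src/README between the RPM upgrade and the restart.

```shell
# Rough sketch of the per-node upgrade sequence described above.
# VERSION and ARCH are illustrative assumptions; substitute your actual values.
VERSION="3.4.0-2"
ARCH="x86_64"

upgrade_node() {
    # 1. Stop GPFS and try to unload the kernel extensions.
    mmshutdown
    if ! mmfsenv -u; then
        echo "kernel extensions busy: plan a reboot after the install"
    fi

    # 2. Apply the update RPMs.
    rpm -U "gpfs.base-$VERSION.$ARCH.update.rpm"
    rpm -U "gpfs.docs-$VERSION.noarch.rpm"
    rpm -U "gpfs.gpl-$VERSION.noarch.rpm"
    rpm -U "gpfs.msg.en_US-$VERSION.noarch.rpm"

    # 3. Rebuild the portability layer per /usr/lpp/mmfs/src/README,
    #    then restart GPFS on this node.
    mmstartup
}
```

Because the function only wraps the commands, the order of operations can be reviewed (or dry-run with stubbed commands) before touching a real node.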
Additional information
Notices
[October 26, 2010]
The restriction below is no longer in effect. GPFS file systems with file system format version less than 9.00 as reported by mmlsfs (V2.3 and older) can now be mounted on a GPFS V3.4 cluster safely.
[July 30, 2010]
Restrictions: File systems that currently have file system format version less than 9.00, as reported by mmlsfs (this format version corresponds to GPFS V2.3 and older) cannot be mounted on a GPFS V3.4 cluster. This restriction will be lifted in a future GPFS update.
[April 1, 2010]
During internal testing, a rare but potentially serious problem has been discovered in GPFS. Under certain conditions, a read from a cached block in the GPFS pagepool may return incorrect data which is not detected by GPFS. The issue is corrected in GPFS 3.3.0.5 (APAR IZ70396) and GPFS 3.2.1.19 (APAR IZ72671). All prior versions of GPFS are affected.
See "Problem discovered in earlier GPFS releases" under Known issues above for details.
[March 31, 2010]
Support for SLES 10 kernels beyond 2.6.16.60-0.58.1 has changed: GPFS 3.3 requires level 3.3.0-5 and GPFS 3.2 requires level 3.2.1-18.
[December 17, 2009]
Support for GPFS 3.1 has been extended only for AIX and Linux on POWER systems. Service updates will be made available for other Linux platforms, but support is not being extended.
[November 9, 2009]
GPFS 3.3.0-1 does not operate correctly with file systems created with GPFS V2.2 (or older). Such file systems can be identified by running "mmlsfs all -u": if "no" is shown for any file system, that file system uses the old format and GPFS 3.3.0-1 cannot be used. GPFS 3.3.0-2 corrects this issue.
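The check described in this notice can be wrapped in a small filter. The exact column layout of mmlsfs output is an assumption here (a "-u" flag line whose value column reads "yes" or "no"); verify against your system's actual output before relying on it.

```shell
# Sketch: scan `mmlsfs all -u` output for file systems still in the old
# (pre-V2.3) format. The layout of the "-u" line is an assumption.
check_old_format() {
    # Reads mmlsfs output on stdin; warns if any "-u" value is "no".
    if awk '$1 == "-u" && $2 == "no" { found = 1 } END { exit !found }'; then
        echo "old-format file system detected: do not use GPFS 3.3.0-1"
    else
        echo "all file systems use the newer format"
    fi
}
```

Intended usage would be along the lines of: mmlsfs all -u | check_old_format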
[November 7, 2008]
GPFS 3.2.1.7 contained a change that impacts the TSM HSM recall process for files with stub size >0, causing hangs during recalls. To avoid this problem, set the configuration parameter dmapiDataEventRetry to 'no' with the command 'mmchconfig dmapiDataEventRetry=no -i'.
[September 11, 2008]
The 3.2.1-5 maintenance level had a data integrity problem using the mmap feature to write or update files on Linux and AIX. The 3.2.1-6 maintenance level is the recommended upgrade path from versions 3.2.0-0 through 3.2.1-4.
Package information
The update images listed below and contained in the tar image with this README are maintenance packages for GPFS. The update images are standard RPM images that can be applied directly to your system.
The update images require a prior level of GPFS. Thus, the usefulness of this update is limited to installations that already have the GPFS product. Contact your IBM representative if you desire to purchase a fully installable product that does not require a prior level of GPFS.
After all RPMs are installed, you have successfully updated your GPFS product.
Update to Version:
3.4.0-2
Update from Version:
3.4.0-0 through 3.4.0-1
Update (tar file) contents:
README
changelog
gpfs.base-3.4.0-2.x86_64.update.rpm
gpfs.docs-3.4.0-2.noarch.rpm
gpfs.gpl-3.4.0-2.noarch.rpm
gpfs.msg.en_US-3.4.0-2.noarch.rpm
Changelog for GPFS 3.4.x
Unless specifically noted otherwise, this history of problems fixed for GPFS 3.4.x applies to all supported platforms.
Problems fixed in GPFS 3.4.0.2 [October 07, 2010]
- FSCK checks log file inodes even if they have log group number set to -1.
- gpfsInodeCache slab (and cpu) usage high due to NFS anon dentry allocations.
- Fix a rare occurrence of file fragment expansion during file sync that can cause the assert failure related to GETSUBBLOCKSPERFILEBLOC.
- If node cannot do cNFS recovery for a failed node then commit suicide so another node can do the takeover for both nodes.
- Prevent FGDL kernel memory fault caused by very narrow race condition during directory lookup.
- Fix assert related to RCTX.REPLIED, TSCOMM.C that occurs on the FS manager node if the FS manager is running GPFS release 3.2, and a release 3.3 client tries to mount the filesystem.
- Linux IO: check mm_struct before pinning pages.
- Improve performance of stat operations on Linux under certain multi-node access patterns.
- Fixed an incompatibility between GPFS for Windows and the Interop Systems software bundles. This incompatibility caused Interop Systems bundle installation failure.
- Fix hang between node join thread and events exporter request handler thread.
- This update addresses the following APARs: IZ81230 IZ83798 IZ84008 IZ84016 IZ84039 IZ84041 IZ84045 IZ84145 IZ84161.
Document Information
Modified date:
25 June 2021
UID
isg400000330
