Description: zFS always reads and writes data in
8K blocks. However, starting in z/OS V1R13, zFS stores data either
inline or in 8K blocks. (Inline data is file data smaller than
53 bytes that is stored directly in the file's metadata.) Unlike
previous releases, z/OS V1R13 zFS no longer stores data in 1K fragments.
z/OS V1R13 zFS can still read data stored in fragments; however, when
the data is updated, it is moved into 8K blocks. Previously, zFS could
store data in 1K fragments (contained in an 8K block), which meant that
multiple small files could be stored in a single 8K block.
Because data is no longer stored in fragments, zFS might need more
DASD storage than was required in previous releases to store the same
amount of data. More storage may also be needed if z/OS V1R13 zFS
is in a mixed-release sysplex and becomes the zFS owning system of
a file system.
- Scenario 1: If every file in the file system is 1K or less,
z/OS V1R13 zFS could require up to four times the DASD storage that
was needed in previous releases.
- Scenario 2: Because HFS uses 4K blocks to store data and
zFS uses 8K blocks, if every file in the file system were 4K or less,
z/OS V1R13 zFS could require up to twice as much DASD space as HFS
did to store these files.
- Scenario 3: If the file system contains 1000 files that
are 1K in size, z/OS V1R13 zFS could take a maximum of 10 cylinders
more than zFS in previous releases.
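The scenario 3 estimate can be checked with some quick arithmetic (a sketch only; it assumes standard 3390 geometry of 15 tracks of 56,664 bytes per cylinder, and that the old format packed the 1K files perfectly):

```shell
# Worst-case extra space for 1000 files of 1K each:
# previously ~1K per file (fragments), now one 8K block per file.
files=1000
delta_bytes=$(( files * (8192 - 1024) ))      # 7,168,000 bytes extra
cyl_bytes=$(( 15 * 56664 ))                   # 849,960 bytes per 3390 cylinder
cylinders=$(( (delta_bytes + cyl_bytes - 1) / cyl_bytes ))
echo "$cylinders"                             # prints 9, within the stated 10-cylinder maximum
```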
Typically, however, any increase in the DASD storage used by zFS
will be negligible. For example, the z/OS V1R13 version root file
system copied using z/OS V1R13 zFS takes approximately 2% more space
than the same file system copied using z/OS V1R12 zFS. Note that z/OS
V1R13 zFS packs multiple ACLs and symbolic links into a single 8K block,
which previous releases did not do. To minimize the chance of application
failure due to running out of DASD storage in newly mounted file systems,
the default value for the IOEFSPRM option aggrgrow is changed from Off to On.
Element or feature: |
z/OS Distributed File Service. |
When change was introduced: |
z/OS V1R13. |
Applies to migration from: |
z/OS V1R12. |
Timing: |
Before installing V2R1. |
Is the migration action required? |
Yes, if you will be using z/OS V2R1 zFS to create
new zFS file systems or to update data in existing file systems that
contain many small files. This action is also required
if you have not specified the zFS aggrgrow option in your IOEFSPRM
configuration options file and want to keep the old default of Off. |
Target system hardware requirements: |
None. |
Target system software requirements: |
None. |
Other system (coexistence or fallback) requirements: |
None. |
Restrictions: |
None. |
System impacts: |
z/OS V2R1 zFS can use more DASD storage for data
than previous releases required. The amount of DASD storage depends
on file sizes and on ACL and symbolic link usage. In general, the
more small files in the file system, the more likely it is that a
file system created or updated with z/OS V2R1 will require more DASD
storage than previous releases. |
Related IBM Health Checker for z/OS check: |
None. |
Steps to take: Perform the following steps, as appropriate
for your installation.
For all zFS file systems
- If you have not specified the zFS aggrgrow option in
your IOEFSPRM configuration options file, recognize that the default
is changing in z/OS V1R13 from aggrgrow=off to aggrgrow=on. This means
that by default, a zFS file system that is mounted read-write
on z/OS V2R1 will attempt to dynamically extend when it runs out of
space, if a secondary allocation size is specified and there is space
on the volumes.
- If you do not want that default change and you want it to act
as in prior releases, specify aggrgrow=off in your IOEFSPRM configuration
options file so that it takes effect on the next IPL. You can dynamically
change the aggrgrow option to off with the zfsadm config
-aggrgrow off command. You can see your current value
for aggrgrow with the zfsadm configquery -aggrgrow command.
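For example, the IOEFSPRM entry that retains the pre-V1R13 behavior is the single option line:

```
aggrgrow=off
```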
For new zFS file systems
- Increase the estimated size of a new zFS file system, if you know
that many files in the file system will be small.
- Mount zFS read-write file systems and allow them to dynamically
extend, so that if more DASD space is needed, applications do not fail
because the file systems run out of storage.
To do so, mount the file
systems with the AGGRGROW mount option or use the default aggrgrow=on
IOEFSPRM configuration option. The data set must have a non-zero secondary
allocation size and there must be space on the volume to allow dynamic
extension.
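As an illustration, a TSO/E MOUNT command that enables dynamic extension for a single file system might look like the following (the data set name and mount point are hypothetical):

```
MOUNT FILESYSTEM('OMVS.PROD.ZFS') TYPE(ZFS) MODE(RDWR) +
      PARM('AGGRGROW') MOUNTPOINT('/prod')
```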
For existing zFS file systems
- Use the scan for small files utility (zfsspace) to determine if
an existing file system needs more DASD storage. For a mounted zFS
file system, the utility shows the number of small files (1K or less),
if a secondary allocation is specified, and if aggrgrow=on is specified.
You can determine how many files in a file system are 1K or less in
size by using the following shell command:
find <mountpoint> -size -3 -type f -xdev | wc -l
The zfsspace utility can be downloaded from ftp://public.dhe.ibm.com/s390/zos/tools/zfsspace/zfsspace.txt.
- If a file system has a secondary allocation size and is mounted
with the AGGRGROW mount option, allow it to dynamically extend to
minimize the potential failure due to lack of storage. If there are
insufficient candidate volumes, also consider adding volumes by using
the IDCAMS ALTER command with the ADDVOLUMES option. Generally, after
adding volumes, a remount samemode is required to have them take effect.
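For example, the IDCAMS control statement to add candidate volumes might look like the following (the aggregate name and volume serials are hypothetical; for a VSAM linear data set, ALTER ADDVOLUMES is issued against the data component):

```
  ALTER OMVS.PROD.ZFS.DATA ADDVOLUMES(VOL002 VOL003)
```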
- If a file system is not enabled to dynamically extend, consider
explicitly growing the file system using the z/OS UNIX zfsadm
grow command. This is especially important if the file
system contains many small files that will be updated.
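As a sketch, an explicit grow might look like the following, where the aggregate name is hypothetical and the size is the requested new total size in kilobytes (specifying -size 0 requests growth by one secondary allocation):

```
zfsadm grow -aggregate OMVS.PROD.ZFS -size 1440000
```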
- If you expect a file system to grow larger than 4GB (about 5825
3390 cylinders) and it is not SMS-managed with extended addressability,
you will need to copy it to an SMS-managed zFS data set with a data
class that includes extended addressability. To do so, use the pax command.
Before z/OS V2R1, a zFS aggregate larger than 4GB had to be SMS-managed
with a data class that includes extended addressability. Beginning
in z/OS V2R1, you can also have non-SMS-managed VSAM linear data sets
larger than 4GB if they have the extended addressability attribute.
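A pax copy between two mounted file systems might look like the following (the mount points are hypothetical; -pe preserves file attributes and -X keeps the copy from crossing into other mounted file systems):

```
cd /service/oldfs
pax -rw -pe -X . /service/newfs
```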
Reference information: Refer to the following documentation
for more information about the migration steps.