z/OS V2R2 zFS 64-bit Larger Cache Sizes

  

In z/OS V2R2, zFS caches were moved above the 2GB bar into 64-bit storage. This allows for larger zFS caches. Some zFS configuration variables were changed to support the larger size values.

 

Note: To show zFS storage usage above the 2GB bar, the zfsadm query -storage report and the MODIFY ZFS,QUERY,STORAGE console report now contain information about storage above the bar.
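
For example, both reports can be requested as follows, from the z/OS UNIX shell and from the operator console respectively (ZFS is the typical name of the zFS started task, but it can differ per installation):

/bin/zfsadm query -storage

F ZFS,QUERY,STORAGE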

 

In z/OS V2R2, the metaback cache is no longer a separate cache; the metaback_cache_size is combined with the meta_cache_size into a single metadata cache. To avoid confusion and keep the configuration free of outdated specifications, consider updating the zFS configuration file to combine these two values into meta_cache_size and remove the metaback_cache_size definition from the file, as in the sketch below.
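
As a hypothetical illustration (the sizes here are made up, not recommendations), a configuration file that previously specified

meta_cache_size=64M
metaback_cache_size=128M

could be updated to carry the combined amount in the single remaining option:

meta_cache_size=192M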

 

zFS provides a health check (ZFS_VERIFY_CACHESIZE) that indicates whether the metadata cache size is less than the calculated default metadata cache size. zFS also provides a check (ZFS_CACHE_REMOVALS) that indicates whether the metaback_cache_size configuration option is still specified.
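
Both checks can also be run on demand with the IBM Health Checker for z/OS MODIFY command. A minimal sketch, assuming the Health Checker started task uses the common name HZSPROC:

F HZSPROC,RUN,CHECK=(IBMZFS,ZFS_VERIFY_CACHESIZE)

F HZSPROC,RUN,CHECK=(IBMZFS,ZFS_CACHE_REMOVALS)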

 

The user_cache_size zFS configuration option specifies the size of the cache that is used to contain file data. The ZFS_VERIFY_CACHESIZE health check also indicates whether the user cache size is less than the default user cache size.

 

Before IPLing with z/OS V2R2, we defined the following in our zFS parameters.

 

user_cache_size=256m

meta_cache_size=256m

 

After the IPL on z/OS V2R2, we received the following health check indications.

 

From the syslog or operlog:

HZS0001I CHECK(IBMZFS,ZFS_VERIFY_CACHESIZE): 

IOEZH0044E zFS is running with a user-defined meta_cache_size of

256M and a default metaback_cache_size of 0M.

The default meta_cache_size is 2148M.

The sum of the current two cache sizes 256M is less than the default meta_cache_size 2148M.

 

HZS0001I CHECK(IBMZFS,ZFS_VERIFY_CACHESIZE):

IOEZH0045E zFS is running with a user-defined user_cache_size of 256M

that is less than the default size of 2048M.

 

From the SDSF health check display, we saw an exception:

NP  NAME                 CheckOwner State           Status

    ZFS_CACHE_REMOVALS   IBMZFS     ACTIVE(ENABLED) SUCCESSFUL

    ZFS_VERIFY_CACHESIZE IBMZFS     ACTIVE(ENABLED) EXCEPTION-LOW

 

We did not have any of the zFS configuration variables defined that the ZFS_CACHE_REMOVALS check targets. For the ZFS_CACHE_REMOVALS check, we received output similar to the following.

 

CHECK(IBMZFS,ZFS_CACHE_REMOVALS)

SYSPLEX:   UTCPLXJ8 SYSTEM: JC0

START TIME: 05/13/2015 12:59:24.342281

CHECK DATE: 20141201 CHECK SEVERITY: LOW

 

IOEZH0062I zFS configuration variable metaback_cache_size is not

specified.

 

IOEZH0063I zFS configuration variable tran_cache_size is not specified.

 

IOEZH0064I zFS configuration variable client_cache_size is not

specified.

 

END TIME: 05/13/2015 12:59:24.345509 STATUS: SUCCESSFUL

 

We did have some of the zFS configuration variables defined that the ZFS_VERIFY_CACHESIZE check targets. For the ZFS_VERIFY_CACHESIZE check, we received output similar to the following:

 

CHECK(IBMZFS,ZFS_VERIFY_CACHESIZE)

SYSPLEX:   UTCPLXJ8 SYSTEM: JC0

START TIME: 05/13/2015 12:59:24.342156

CHECK DATE: 20150205 CHECK SEVERITY: LOW

 

* Low Severity Exception *

 

IOEZH0044E zFS is running with a user-defined meta_cache_size of

256M and a default metaback_cache_size of 0M.

The default meta_cache_size is 2148M.

The sum of the current two cache sizes 256M is less than the default meta_cache_size 2148M.

 

 Explanation: zFS is running with the indicated metadata cache size

and metadata backing cache size. The sum of the current two cache sizes is less than the default (or user-overridden) metadata cache size. Running with a very small metadata cache might affect zFS performance. See the topic on Performance tuning in z/OS Distributed File Service zFS Administration to determine whether the current settings might impact zFS performance.

 

If the user-specified metadata cache size is less than 1M (minimum) or more than 64G (maximum), zFS replaces it with the minimum or maximum value and displays the current metadata cache size with the new value. If the user-specified metadata backing cache size is less than 1M (minimum) or more than 2048M (maximum), zFS replaces it with the minimum or maximum value and displays the current metadata backing cache size with the new value.

 

 System Action: The system continues processing.

 

 Operator Response: Report this problem to the system programmer.

 

 System Programmer Response: For user-specified cache size, check the

   value that is defined in meta_cache_size or metaback_cache_size

   option in the IOEFSPRM configuration file and verify that it is

   within the valid range. Depending on the performance analysis, if

   the current settings do not perform as well as the default size,

   specify meta_cache_size with the default size in your IOEFSPRM

   configuration file and restart zFS. The meta_cache_size

   configuration option can also be dynamically updated using the

   zfsadm config command.

 

   Otherwise, specify the current meta cache size with the user

   override check parameter meta_cache on the PARM statement (for

   either HZSPRMxx or MODIFY hzsproc) in order to make the check

   succeed.

 

 Problem Determination: N/A

 

 Source: z/OS File System

 

 Reference Documentation: See the topic on "Performance tuning" or

   "IOEFSPRM" in z/OS Distributed File Service zFS Administration

 

 Automation: N/A

 

 Check Reason: Verifies whether the default cache sizes are used

 

* Low Severity Exception *

 

IOEZH0045E zFS is running with a user-defined user_cache_size of 256M

that is less than the default size of 2048M.

 

 Explanation: zFS is running with the indicated user cache size and the size is less than the default or user-overridden user cache size. Running with a very small user cache size could affect zFS performance. See the topic on Performance tuning in z/OS Distributed File Service zFS Administration to determine whether the current setting might affect zFS performance.

 

   If the user-specified user cache size is less than 10M (minimum) or

   more than 65536M (maximum), zFS replaces it with the minimum or

   maximum value and displays the current size with the new value.

 

 System Action: The system continues processing.

 

 Operator Response: Report this problem to the system programmer.

 

 System Programmer Response: For a user-specified cache size, check the value that is defined in the user_cache_size option in the IOEFSPRM configuration file and verify that it is within the valid range. Depending on the performance analysis, if the current setting does not perform as well as the default value, specify user_cache_size with the default size in your IOEFSPRM configuration file and restart zFS. The user_cache_size configuration option can also be dynamically updated using the zfsadm config command.

 

   Otherwise, specify the current user cache size with the user

   override check parameter user_cache on the PARM statement (for

   either HZSPRMxx or MODIFY hzsproc) in order to make the check

   succeed.

 

 Problem Determination: N/A

 

 Source: z/OS File System

 

 Reference Documentation: See the topic on "Performance tuning" or

   "IOEFSPRM" in z/OS Distributed File Service zFS Administration

 

 Automation: N/A

 

 Check Reason: Verifies whether the default cache sizes are used

 

END TIME: 05/13/2015 12:59:24.350115 STATUS: EXCEPTION-LOW
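
Based on the System Programmer Response above, if the smaller cache sizes had been intentional, the check could instead have been told to compare against those values through its user override parameters. The following is a sketch only, using our sizes; the exact PARM syntax for this check should be verified in the zFS health check documentation, and HZSPROC is an assumed started task name:

F HZSPROC,UPDATE,CHECK=(IBMZFS,ZFS_VERIFY_CACHESIZE),PARM='META_CACHE(256M),USER_CACHE(256M)'

An equivalent UPDATE statement could be placed in an HZSPRMxx member to make the override persist across IPLs.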

 

We made the following modifications to the zFS configuration parameters based on the health check indications, setting them to the default values at this time.

 

For that particular system, we modified the zFS configuration file to define the following. This will be used for any future IPLs on z/OS V2R2.

 

user_cache_size=2048M

meta_cache_size=2148M

 

We also made the changes dynamically on that particular system with the following commands.

 

/bin/zfsadm config -meta_cache_size 2148M

/bin/zfsadm config -user_cache_size 2048M
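
To confirm that the dynamic updates took effect, the current values can be queried back from the shell with the zfsadm configquery command, for example:

/bin/zfsadm configquery -meta_cache_size

/bin/zfsadm configquery -user_cache_size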

 

After making the modifications and rerunning the checks, the SDSF health check display showed SUCCESSFUL for that system.

 

NP  NAME                 CheckOwner State           Status

    ZFS_CACHE_REMOVALS   IBMZFS     ACTIVE(ENABLED) SUCCESSFUL

    ZFS_VERIFY_CACHESIZE IBMZFS     ACTIVE(ENABLED) SUCCESSFUL

Author: Alfred Lease