File system
The SAP system requires shared access to some directories (global, profile, trans), while shared access is optional for other directories (for example, the directory containing the executable programs). We highly recommend using NFS also for the shared exe directory or directories. In addition, failover must be considered in case connections to the directories are disrupted.
Create shared directory access between z/OS® systems by using zFS file systems.
In a heterogeneous environment, remote servers (such as Linux®, AIX® or Windows application servers) need access to the SAP directories as well.
For UNIX or Linux systems, NFS is needed to share files. As a result, the availability of the file systems together with the NFS server becomes a critical factor. In this document, it is assumed that the critical file systems reside on z/OS.
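For illustration, the following is a minimal sketch of how a Linux application server might mount an SAP directory that is exported by the z/OS NFS server. The host name nfssrv (which should resolve to the NFS server's dynamic VIPA, see below) and the export path /hfs/sapmnt are assumptions for this example; the actual path depends on the NFS server's site attributes and export list.
# Create the local mount point and mount the exported SAP directory (example values).
mkdir -p /sapmnt
mount -t nfs -o vers=3,hard nfssrv:/hfs/sapmnt /sapmnt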
High Availability and performance improvements with zFS sysplex-aware
zFS provides additional performance improvements when running sysplex-aware in a shared file system environment. In addition, to support read/write mounted file systems that are accessed sysplex-aware, zFS automatically moves ownership of a zFS file system to the system that has the most read/write activity.
Define NFS-exported zFS shared file systems as sysplex-aware. For details, see z/OS File System Administration.
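As an illustration, a read/write mounted zFS file system is accessed sysplex-aware when it is mounted with the RWSHARE parameter, or when RWSHARE is made the default share mode. The following is a sketch, using the OMVS.ZFS.SAPMNT file system that also appears in the migration examples below; verify the options against your zFS configuration:
MOUNT FILESYSTEM('OMVS.ZFS.SAPMNT') MOUNTPOINT('/sapmnt') TYPE(ZFS) PARM('RWSHARE') MODE(RDWR) AUTOMOVE
Alternatively, make RWSHARE the default for read/write mounts and verify the setting:
zfsadm config -sysplex_filesys_sharemode rwshare
zfsadm configquery -sysplex_filesys_sharemode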
Performance improvements and space efficiency with z/OS file system aggregate version 1.5 and extended (v5) directory
Beginning with z/OS V2R1, zFS provides an optional, new format zFS aggregate, the version 1.5 aggregate. One purpose of the version 1.5 aggregate is to support a new directory format (extended v5 directory) that will scale better when the directory contains many names (over 10,000).
Extended (v5) directories provide the following benefits:
- They support larger directories with good performance.
- They store names more efficiently than v4 directories.
- When names are removed from extended (v5) directories, the space is reclaimed when possible.
Earlier z/OS releases cannot access extended (v5) directories or version 1.5 aggregates. In order to control the transition to the new format directories, extended (v5) directories can only be created in version 1.5 aggregates.
For more information on v1.5 aggregates and v5 directories, refer to z/OS File System Administration.
Migration prerequisite: All members of the sysplex must be z/OS V2R1 or higher.
- Identify the file system aggregates that you would like to convert to v1.5 aggregates. Display the current version of an aggregate using the command:
zfsadm aggrinfo -aggregate omvs.zfs.sapmnt -long
- Identify the directories in those file systems that you would like to convert to v5 directories. Display the current version of a directory using the command:
zfsadm fileinfo -path /sapmnt
- Unmount the file system using the command:
/usr/sbin/unmount /sapmnt
- Edit the BPXPRMxx PARMLIB member that holds the file system mount commands and insert PARM('CONVERTTOV5'), as in this sample:
MOUNT FILESYSTEM('OMVS.ZFS.SAPMNT') MOUNTPOINT('/sapmnt') TYPE(ZFS) PARM('CONVERTTOV5') MODE(RDWR) AUTOMOVE
With the first mount, the file system is converted to a version 1.5 aggregate. In addition, all directories in the file system, including subdirectories, are converted to v5 directories.
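After the file system is mounted again, the conversion can be verified with the same display commands that were used above to identify the versions:
zfsadm aggrinfo -aggregate omvs.zfs.sapmnt -long
zfsadm fileinfo -path /sapmnt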
Migration procedure without downtime: Unmounting the file systems is not required.
- Display the current version of an aggregate:
zfsadm aggrinfo -aggregate omvs.zfs.sapmnt -long
- Convert this aggregate to v1.5:
zfsadm convert -aggrversion OMVS.ZFS.SAPMNT
Message IOEZ00810I indicates the successful change:
Successfully changed aggregate OMVS.ZFS.SAPMNT to version 1.5.
- Display the current version of a directory:
zfsadm fileinfo -path /sapmnt
- Convert a single large directory to v5:
zfsadm convert -path /sapmnt
Message IOEZ00791I indicates the successful conversion:
Successfully converted directory /sapmnt to version 5 format.
To ensure that all newly created file system aggregates are v1.5 aggregates, change the default:
- Verify the current setting:
zfsadm configquery -format_aggrversion
- Configure v1.5 as the default:
zfsadm config -format_aggrversion 5
- Verify the new setting:
zfsadm configquery -format_aggrversion
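Note that zfsadm config changes the setting of the running zFS. To keep the default across restarts, the corresponding option can also be specified in the zFS parameter member (IOEFSPRM or IOEPRMxx); the following line is a sketch that assumes the option name matches the zfsadm keyword:
format_aggrversion=5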
Failover of the NFS server
NFS clients try to reconnect automatically if a connection is disrupted. When the NFS server fails, it can be restarted on the same system; if that is not possible, it is restarted on a second system.
To allow this failover to be transparent to applications on the NFS client side, the following conditions must be met:
- A dynamic VIPA is defined that moves with the NFS server.
- The NFS clients must use the dynamic VIPA as the host name in their mount commands (see the sketch after this list).
- The physical file systems that are exported by the NFS server must be accessible on all systems where the NFS server might be started. This is another reason to use zFS shared file system sysplex-aware support.
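The following is a sketch of one way to define such a dynamic VIPA in the TCP/IP profile of every system on which the NFS server can run; the IP address and mask are placeholders. With a VIPARANGE definition, the VIPA is activated on the system where the NFS server is started (for example, by the automation that starts the NFS server, using the MODDVIPA utility or a bind to the VIPA address), so it moves with the NFS server.
VIPADYNAMIC
  VIPARANGE DEFINE MOVEABLE NONDISRUPTIVE 255.255.255.255 10.10.10.1
ENDVIPADYNAMIC
The NFS clients then mount with a host name that resolves to this VIPA, as in the Linux mount example earlier in this section.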
The failover scenario is shown in Figure 1 and Figure 2. Note that the NFS VIPA is different from the VIPA of SCS, so the two can be moved independently of each other.

