File system

The SAP system requires shared access to some directories (global, profile, trans), while shared access is optional for other directories (for example, the directory containing the executable programs). We highly recommend using NFS for the shared exe directory (or directories) as well. In addition, failover needs to be considered if connections to the directories are disrupted.

Create shared directory access between z/OS® systems with zFS file systems.

In a heterogeneous environment, remote servers (such as Linux®, AIX® or Windows application servers) need access to the SAP directories as well.

For UNIX or Linux systems, NFS is needed to share files. As a result, the availability of the file systems, together with that of the NFS server, becomes a critical factor. In this document, it is assumed that the critical file systems reside on z/OS.

Important: File access is not transactional. There is no commit or rollback logic. In the case of a system failure, there is no guarantee that the last written data has been stored on disk. Therefore, with NFS version 3 (NFSv3), the Network Lock Manager (NLM) must be used to guarantee transactional file access. With NFS version 4 (NFSv4), the locking function is enabled automatically. The methods described in the next section of this topic ensure that the NFS file systems become available again quickly and automatically. In most cases this is transparent to the SAP system. See also Application design.
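
For example, on a Linux application server, the NFS mount might look like the following. This is only a sketch: the host name nfssap is assumed to resolve to the dynamic VIPA of the NFS server (see Failover of the NFS server), and the exported path depends on the attributes of your z/OS NFS server.

    # NFS version 4 - locking is enabled automatically
    mount -o vers=4,hard nfssap:/sapmnt /sapmnt

    # NFS version 3 - do not specify 'nolock', so that NLM locking is used
    mount -o vers=3,hard nfssap:/sapmnt /sapmnt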

High availability and performance improvements with sysplex-aware zFS

zFS provides additional performance improvements when running sysplex-aware in a shared file system environment. Also, to support read/write mounted file systems that are accessed as sysplex-aware, zFS automatically moves ownership of a file system to the system with the most read/write activity.

Define NFS-exported zFS shared file systems as sysplex-aware. For details, see z/OS File System Administration.
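
For example, an individual zFS file system can be mounted sysplex-aware for read/write access by specifying the RWSHARE mount parameter. The file system and mount point names in this sketch are sample names only (the same ones used in the migration examples later in this topic):

     MOUNT FILESYSTEM('OMVS.ZFS.SAPMNT')
           MOUNTPOINT('/sapmnt')
           TYPE(ZFS)
           PARM('RWSHARE')
           MODE(RDWR)
           AUTOMOVE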

Performance improvements and space efficiency with z/OS file system aggregate version 1.5 and extended (v5) directory

Beginning with z/OS V2R1, zFS provides an optional new aggregate format, the version 1.5 aggregate. One purpose of the version 1.5 aggregate is to support a new directory format (extended v5 directory) that scales better when the directory contains many names (over 10,000).

Extended (v5) directories provide the following benefits:

  • They can support larger directories with good performance.
  • They store names more efficiently than v4 directories.
  • When names are removed from extended (v5) directories, the space is reclaimed when possible.

Earlier z/OS releases cannot access extended (v5) directories or version 1.5 aggregates. In order to control the transition to the new format directories, extended (v5) directories can only be created in version 1.5 aggregates.

Note: You should only create or change to a version 1.5 aggregate if you are sure you will not run z/OS releases prior to V2R1 in your sysplex. Creating or changing to a version 1.5 aggregate requires explicit action. By default, aggregates created in z/OS V2R1 are version 1.4 aggregates. Over time, it is likely that the default will change to version 1.5 aggregates.

For more information on v1.5 aggregates and v5 directories, refer to z/OS File System Administration.

Migration prerequisite: All members of the sysplex must be z/OS V2R1 or higher.

Migration considerations:
  • Identify the file system aggregates that you would like to convert to v1.5 aggregates. Display the current version of an aggregate using the command:
    zfsadm aggrinfo -aggregate omvs.zfs.sapmnt -long
  • Identify the directories in those file systems that you would like to convert to v5 directories. Display the current version of a directory using the command:
    zfsadm fileinfo -path /sapmnt
Migration procedure with downtime: This variation is only applicable if the file system can be unmounted for a short time.
Note: In a shared file system configuration, the file system must be unmounted on every system.
  1. Unmount the file system using the command:
    /usr/sbin/unmount /sapmnt
  2. Edit the BPXPRMxx PARMLIB member that holds the file system MOUNT statements and insert PARM('CONVERTTOV5') as in this sample:
    
    MOUNT FILESYSTEM('OMVS.ZFS.SAPMNT')
          MOUNTPOINT('/sapmnt')
          TYPE(ZFS)
          PARM('CONVERTTOV5')
          MODE(RDWR)
          AUTOMOVE
    

With the first mount after this change, the file system is converted to a v1.5 aggregate. Also, all directories in the file system, including sub-directories, are converted to v5 directories.
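
After the file system has been mounted again, you can verify the conversion with the same display commands used to identify the candidates:

    zfsadm aggrinfo -aggregate omvs.zfs.sapmnt -long
    zfsadm fileinfo -path /sapmnt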

Migration procedure without downtime: Unmounting the file systems is not required.

  1. Display the current version of an aggregate:
    zfsadm aggrinfo -aggregate omvs.zfs.sapmnt -long 
  2. Convert this aggregate to v1.5:
    zfsadm convert -aggrversion OMVS.ZFS.SAPMNT

    Message IOEZ00810I indicates the successful change:

    Successfully changed aggregate OMVS.ZFS.SAPMNT to version 1.5.
  3. Display the current version of a directory:
    zfsadm fileinfo -path /sapmnt
  4. Convert a single large directory to v5:
    zfsadm convert -path /sapmnt

    Message IOEZ00791I indicates the successful conversion:

    Successfully converted directory /sapmnt to version 5 format.
Note: There is no known option to include sub-directories. To convert a large number of directories, you might consider creating a script.
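
For example, the following z/OS UNIX shell sketch converts every directory below /sapmnt to v5 format; it assumes that the containing aggregate has already been converted to version 1.5:

    # Convert all directories below /sapmnt to v5 format.
    # Assumes the aggregate is already a version 1.5 aggregate.
    find /sapmnt -type d | while read dir; do
      zfsadm convert -path "$dir"
    done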

To make all newly created file system aggregates v1.5 aggregates, change the default:

  1. Verify the current setting:
    zfsadm configquery -format_aggrversion
  2. Configure v1.5:
    zfsadm config -format_aggrversion 5
  3. Verify the new setting:
    zfsadm configquery -format_aggrversion
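
Note that zfsadm config changes only the setting of the currently running zFS. To keep the new default across zFS restarts, the corresponding option can also be set in the IOEFSPRM configuration file, for example:

    format_aggrversion=5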

Failover of the NFS server

NFS clients try to reconnect automatically if a connection is disrupted. If the NFS server fails, it can be restarted on the same system; if this is not possible, it is restarted on a second system.

To allow this failover to be transparent to applications on the NFS client side, the following conditions must be met:

  • A dynamic VIPA is defined that moves with the NFS server.
  • The NFS clients must use the dynamic VIPA as the host name in their mount commands.
  • The physical file systems that are exported by the NFS server must be accessible on all systems where the NFS server might be started. This is another reason for using zFS shared file system sysplex-aware support.

The failover scenario is shown in Figure 1 and Figure 2. Note that the NFS VIPA is different from the VIPA of SCS, so the two can be moved independently of each other.
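
For example, the NFS VIPA could be defined as a dynamic VIPA in the TCP/IP profiles of the systems involved. The following is only a sketch with assumed IP addresses; depending on your setup, the VIPA can also be created and moved by an automation product:

    ; TCP/IP profile of the system where the NFS server normally runs
    VIPADYNAMIC
      VIPADEFINE 255.255.255.0 10.1.1.10
    ENDVIPADYNAMIC

    ; TCP/IP profile of the backup system
    VIPADYNAMIC
      VIPABACKUP 1 10.1.1.10
    ENDVIPADYNAMIC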

Figure 1. Initial NFS client/server configuration
Graphic showing initial NFS client/server configuration
Figure 2. Failover of the NFS server
Graphic showing failover of the NFS server