To explain zFS administration,
the following concepts and terms are introduced:
- Attach
- When a zFS file system is mounted, the data set is also attached. Attach
means that zFS allocates and opens the data set. This attach occurs
the first time a file system contained in the data set is mounted.
A zFS data set can also be attached (by issuing the zfsadm
attach command) without mounting it. However, there are
many restrictions in this case. For example, the zFS data set would
not be available to z/OS® UNIX applications because it was
not mounted. In a shared file system environment, the zFS data set
would be detached, not moved, if the system went down or zFS internally
restarted. You might attach a zFS data set to explicitly grow it (zfsadm
grow) or to determine the free space available (zfsadm
aggrinfo). You can also delete a .bak file system (zfsadm
delete) from an attached zFS data set. Because
zFS has removed support for .bak (clone) file systems, you should
delete any .bak file systems before mounting the primary file system. You
must detach the zFS data set (zfsadm detach) before
mounting it.
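As a sketch of the commands named above, assuming a compatibility mode aggregate named OMVS.EXAMPLE.ZFS (a placeholder), the sequence from the z/OS UNIX shell might look like this:

```
zfsadm attach -aggregate OMVS.EXAMPLE.ZFS
zfsadm aggrinfo -aggregate OMVS.EXAMPLE.ZFS
zfsadm grow -aggregate OMVS.EXAMPLE.ZFS -size 0
zfsadm detach -aggregate OMVS.EXAMPLE.ZFS
```

The detach at the end is required before the data set can be mounted. Check the zfsadm grow documentation for your release for the meaning of the -size operand (a value of 0 typically requests growth by a secondary allocation).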
- Catch-up mount
- When a file system mount is successful on a system
in a shared file system environment, z/OS UNIX automatically issues a corresponding
local mount, which is called a catch-up mount, to every
other system's PFS for a zFS read/write mounted file system that is
mounted RWSHARE or for a read-only mounted file system.
If the corresponding local mount is successful, z/OS UNIX does
not function ship from that system to the z/OS UNIX owning
system when that file system is accessed. Rather, the file request
is sent directly to the local PFS. This is sometimes referred to as
Client=N, as indicated by the output of the D OMVS,F operator command,
or df -v shell command. If the corresponding local
mount is unsuccessful (for instance, DASD is not accessible from that
system), z/OS UNIX function ships requests to the z/OS UNIX owning
system when that file system is accessed (message BPXF221I might be
issued). This is sometimes referred to as Client=Y, as indicated by
the output of the D OMVS,F or df -v commands. For
examples of the command output, see Determining the file system owner.
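As a brief illustration (the mount point path is a placeholder), the two commands named above are entered as follows; the Client indicator appears in their output:

```
(from the operator console)
D OMVS,F

(from the z/OS UNIX shell)
df -v /example/mnt
```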
- File system ownership
- IBM® defines a file system owner
as the system that coordinates sysplex activity for a particular file
system in a shared file system environment. The owner of a file system
is the first system that processes the mount. This system always accesses
the file system locally; that is, the system does not access the file
system through a remote system. Other non-owning systems in the sysplex
access the file system either locally or through the remote owning
system, depending on the PFS and the mount mode.
The file system owner is the system to which file requests are forwarded when the
file system is mounted non-sysplex aware. Having the appropriate owner
is important for performance when the file system is mounted read/write
and non-sysplex aware. The term z/OS UNIX file system owner refers
to the owner of the zFS file system as z/OS UNIX recognizes it. This is typically
the system where the file system is first mounted, but it can differ
from the zFS file system owner (see zFS file system owner).
- zFS file system owner
- zFS has its own concept of file system
ownership, called the zFS file system owner. This is
also typically the system where the file system is first mounted in
a sysplex-aware environment. File requests to sysplex-aware file systems
are sent directly to the local zFS PFS, rather than being forwarded
to the z/OS UNIX file system owner. This concept is shown
in Figure 1. The local zFS PFS forwards
the request to the zFS file system owner, if necessary. The z/OS UNIX file
system owner can be different from the zFS file system owner. (In
reality, zFS owns aggregates. Generally, we simplify this to say zFS
file system owner because zFS compatibility mode aggregates only have
a single file system.)
- z/OS UNIX file system owner
- The term z/OS UNIX file
system owner refers to the owner of the zFS file system as z/OS UNIX knows
it. This is typically the system where the file system is first mounted.
For details about sysplex considerations and the shared
file system environment, see Determining the file system owner and Using zFS in a shared file system environment.
Figure 1. z/OS UNIX and
zFS file system ownership
When a file system
is not sysplex-aware (that is, mounted as NORWSHARE), file requests
are function shipped by z/OS UNIX to the z/OS UNIX file system owner,
and then to the PFS. When a file system is sysplex-aware (that is,
mounted as RWSHARE), file requests are sent directly to the local
zFS PFS and then function shipped by zFS to the zFS file system owner.
- Function shipping
- Function shipping means that a request
is forwarded to the owning system and the response is returned to
the requestor through XCF communications.
- Local mount
- A local mount means that z/OS UNIX issues
a successful mount to the local PFS, which in this case is zFS. z/OS UNIX does
this when either the file system is mounted sysplex-aware for that
mode (read/write or read-only) or the system is the z/OS UNIX owner.
When a file system is locally mounted on the system, z/OS UNIX does
not function ship requests to the z/OS UNIX owning system. To determine
if a system has a local mount, see Determining the file system owner.
- Non-sysplex aware (sysplex-unaware)
- A file system is non-sysplex aware (or sysplex-unaware)
if the PFS (Physical File System) supporting that file system requires
it to be accessed through the remote owning system from all other
systems in a sysplex (allowing only one connection for update at a
time) for a particular mode (read-only or read/write). The system
that connects to the file system is called the file system owner.
Other systems' access is provided through XCF communication with the
file system owner. For a non-sysplex aware zFS file system, file requests
for read/write mounted file systems are function shipped to the owning
system by z/OS UNIX. The owning system is the only system where
the file system is locally mounted and the only system that does I/O
to the file system. See zFS file system
owner and z/OS UNIX file
system owner.
- Read-only file system
- A file system that is mounted for
read-only access is a read-only file system.
- Read/write file system
- A file system that is mounted for
read and write access is a read/write file system.
- Shared file system environment
- The shared file
system environment refers to a sysplex that has a BPXPRMxx
specification of SYSPLEX(YES).
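A minimal BPXPRMxx fragment that establishes this environment (other required statements omitted) might look like:

```
/* BPXPRMxx: enable the z/OS UNIX shared file system environment */
SYSPLEX(YES)
```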
- Sysplex
- The term sysplex, as it applies to zFS, means a sysplex that
supports the z/OS UNIX shared file system environment. That is,
a sysplex that has a BPXPRMxx specification of SYSPLEX(YES).
- Sysplex-aware
- Pertains to a physical file system that handles file requests for mounted file
systems locally instead of shipping function requests through z/OS UNIX.
- Sysplex-aware PFS
- A physical file system (PFS), for example
zFS, is sysplex-aware or non-sysplex aware for a particular mount
mode (read-only or read/write) in a shared file system environment.
When it is sysplex-aware,
the PFS is capable of handling a local mount on the system that is
not the z/OS UNIX owning system. The PFS that is sysplex-aware
can avoid z/OS UNIX function shipping for that mode. Both HFS
and zFS file systems are always sysplex-aware for read-only mounts.
HFS is always non-sysplex aware for read/write mounts and always results
in z/OS UNIX function shipping from systems that are
not the z/OS UNIX owning system.
As of z/OS V1R13,
zFS always runs sysplex-aware (sysplex=filesys) in a shared file system
environment. Individual file systems can be non-sysplex aware or sysplex-aware,
with the default being non-sysplex aware.
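In releases where this behavior is still specified explicitly, the corresponding IOEFSPRM configuration line is simply (as of z/OS V1R13 this is the forced setting in a shared file system environment):

```
sysplex=filesys
```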
- Sysplex-aware file system
- A file system can be mounted sysplex-aware
or non-sysplex aware. When a file system is mounted sysplex-aware,
it means that the file system is locally mounted on every system (when
the PFS is capable of handling a local mount on every system, that is,
when the PFS is running sysplex-aware), and therefore file requests
are handled by the local PFS. All read-only mounted file systems are
always mounted sysplex-aware (see Figure 1).
HFS read/write mounted file systems are always mounted non-sysplex
aware. This means that file requests from non z/OS UNIX owning
systems are always function shipped by z/OS UNIX to the z/OS UNIX owning
system where the file system is locally mounted and the I/O is actually
done.
Beginning with z/OS V1R11,
zFS read/write mounted file systems can be mounted sysplex-aware (see Figure 1) when zFS is configured as
sysplex-aware (zFS IOEFSPRM option sysplex=on or zFS IOEFSPRM option
sysplex=filesys). Beginning with z/OS V1R13,
zFS in a shared file system environment is always sysplex=filesys.
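To make an individual zFS read/write file system sysplex-aware, RWSHARE is specified on the mount. A sketch of a BPXPRMxx MOUNT statement, with placeholder names:

```
MOUNT FILESYSTEM('OMVS.EXAMPLE.ZFS')
      TYPE(ZFS)
      MOUNTPOINT('/example')
      MODE(RDWR)
      PARM('RWSHARE')
```

Specifying PARM('NORWSHARE'), or omitting the PARM (given the non-sysplex aware default), keeps the file system non-sysplex aware.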
- zFS aggregate
- The data set that
contains a zFS file system is called a zFS aggregate. A zFS aggregate
is a Virtual Storage Access Method (VSAM) linear data set. After the
zFS aggregate is defined and formatted, a zFS file system is created
in the aggregate. In addition to the file system, a zFS aggregate
contains a log file and a bitmap describing the free space. A zFS
aggregate has a single read/write zFS file system and is sometimes
called a compatibility mode aggregate. Compatibility mode aggregates
are similar to HFS.
Restriction: zFS does not support
the use of a striped VSAM linear data set as a zFS aggregate. If you
attempt to mount a compatibility mode file system whose aggregate was
formatted as a striped VSAM linear data set, it mounts as read-only.
zFS also does not support a zFS aggregate that has guaranteed space.
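As an illustrative sketch (the data set name and allocation sizes are placeholders, and the formatting utility depends on release), an aggregate is defined with IDCAMS and then formatted:

```
(IDCAMS input: define the VSAM linear data set)
DEFINE CLUSTER (NAME(OMVS.EXAMPLE.ZFS) -
       LINEAR CYL(10 5) SHAREOPTIONS(3))

(z/OS UNIX shell: format it as a compatibility mode aggregate)
zfsadm format -aggregate OMVS.EXAMPLE.ZFS -compat
```

Older releases use the IOEAGFMT batch utility instead of zfsadm format; consult the documentation for your release.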
- zFS file system
- Refers to a hierarchical organization of files and directories that has a
root directory and can be mounted into the z/OS UNIX hierarchy.
zFS file systems are located on DASD.
- zFS Physical File System (PFS)
- Refers to the code that runs in the zFS address space. The zFS PFS can handle
many users accessing many zFS file systems at the same time.