Zoning and portsets
Ensure that you are familiar with these zoning details and host zones. More details are included in the SAN configuration and zoning rules summary.
- Creating, modifying, and deleting zones within the Fibre Channel fabric
- Identifying optimal zone definitions to achieve optimal multi‑path configurations for host servers
- Managing zoning in accordance with SAN best practices
- Backing up and restoring zone configurations to ensure reliability and recoverability
- Automatically adapting zone definitions by tracking changes in host or storage connectivity
With the auto zoning feature, storage target ports manage zoning operations on the FC fabric. Auto zoning creates peer zones, which provide an optimized zoning model:
- Peer zone contents
-
- One principal device, typically a storage array target port.
- Multiple non-principal devices, typically host initiator ports.
- Peer zone communication rules
-
- The principal device can communicate with all non-principal devices, and each non-principal device can communicate with the principal device.
- Non-principal devices cannot communicate with each other.
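The peer zone communication rules above can be sketched as a simple predicate. This is an illustration of the model, not the switch's actual enforcement logic, and the WWPNs are made-up examples:

```python
def can_communicate(a: str, b: str, principal: str) -> bool:
    """In a peer zone, traffic is allowed only if one of the two
    endpoints is the principal device (the storage target port)."""
    return a != b and principal in (a, b)

# Example principal target port and two host initiator ports.
target = "50:05:07:68:10:16:55:c4"
host1 = "10:00:00:00:c9:aa:bb:01"
host2 = "10:00:00:00:c9:aa:bb:02"

assert can_communicate(host1, target, principal=target)     # host <-> target: allowed
assert not can_communicate(host1, host2, principal=target)  # host <-> host: blocked
```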
- Supported switch platforms
- The auto zoning feature is supported by Brocade FC switches that meet the following conditions:
- Fabric OS (FOS) version 9.1.1 or later
- The Target Driven Zoning (TDZ) feature must be enabled on the switch ports where the storage target ports are connected. You can enable the feature through the switch CLI; refer to the switch documentation for detailed steps.
- The TDZ feature does not need to be enabled on host ports.
- Fabric connectivity requirements
- For auto zoning to operate correctly and predictably, the fabric connectivity of the storage target ports and connected host ports must meet specific requirements. These requirements ensure consistent zone formation, proper path distribution, and stable host‑storage communication across fabrics.
- Auto zoning is supported only for user‑defined FC portsets.
- All FC I/O port IDs included in an auto zoning enabled portset must have symmetric fabric connectivity. The same FC I/O port ID from every node in the cluster must be connected to the same fabric. For example, FC I/O port ID 1 from all nodes must connect to Fabric A.
- Ideally, a portset should contain an equal number of target FC I/O ports from each redundant fabric.
- Host initiator ports must be connected to the same fabric as the target ports defined within the portset that you want to associate with the host object.
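The symmetric-connectivity requirement above can be sketched as a validation over a hypothetical inventory that maps each node's FC I/O port IDs to fabric names. The data layout and names are illustrative, not a product API:

```python
def is_symmetric(connectivity):
    """connectivity: {node_name: {fc_io_port_id: fabric_name}}.
    Symmetric connectivity means each FC I/O port ID maps to the
    same fabric on every node in the cluster."""
    fabric_for_port = {}
    for node, ports in connectivity.items():
        for port_id, fabric in ports.items():
            expected = fabric_for_port.setdefault(port_id, fabric)
            if fabric != expected:
                return False  # same port ID reaches different fabrics
    return True

# Port ID 1 -> Fabric A and port ID 2 -> Fabric B on both nodes: symmetric.
ok = {"node1": {1: "A", 2: "B"}, "node2": {1: "A", 2: "B"}}
# Port ID 1 reaches Fabric A on node1 but Fabric B on node2: not symmetric.
bad = {"node1": {1: "A", 2: "B"}, "node2": {1: "B", 2: "B"}}

assert is_symmetric(ok)
assert not is_symmetric(bad)
```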
- Auto zoning workflow
- To start using auto zoning when adding new hosts in an existing configuration:
- Ensure that the fabric requirements are met as described in Fabric connectivity requirements.
- Create an auto zoning enabled Fibre Channel portset.
- Create the host by using the FDMI hostname or by specifying a list of host port WWPNs.
- Associate the host with the auto zoning enabled portset.
- Zoning policies
-
Auto zoning supports configurable zoning policies that determine how initiator WWPNs are matched with target port WWPNs when forming peer zones.
Zoning policies are applied at the portset level, and different portsets can use different policies based on deployment requirements.
Available zoning policies are:
- one_to_one
-
This policy enforces a single path between every initiator port and one target port on each canister within the same fabric.
- Each initiator WWPN is zoned with only one FC I/O port ID from the portset. Each FC I/O port ID represents a single target port from each canister.
- Zoning is performed between the initiator port and the portset target port, provided both ports are within the same fabric.
- Initiator ports are evenly distributed across all target ports from the portset to achieve better workload distribution.
- After zoning, that initiator WWPN is not zoned with any additional target ports from the portset, even if multiple suitable target ports exist on that fabric.
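The even distribution described above can be sketched as a round-robin assignment of initiators to the portset's target ports. The pairing function illustrates the policy's intent; it is not the product's internal implementation:

```python
from itertools import cycle

def assign_one_to_one(initiator_wwpns, target_ports):
    """Zone each initiator WWPN with exactly one target port,
    cycling through the portset's target ports so that initiators
    are spread evenly for workload distribution."""
    pairing = {}
    targets = cycle(target_ports)
    for wwpn in initiator_wwpns:
        pairing[wwpn] = next(targets)  # one target per initiator, no more
    return pairing

hosts = ["h1", "h2", "h3", "h4"]
targets = ["t1", "t2"]
zones = assign_one_to_one(hosts, targets)
# Each initiator gets exactly one target; load splits evenly across targets.
assert zones == {"h1": "t1", "h2": "t2", "h3": "t1", "h4": "t2"}
```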
- one_to_one_all_fabrics
-
This policy also creates one‑to‑one zoning, but it is designed for host environments where initiator WWPNs might not be visible on all fabrics during initial discovery.
- Each initiator WWPN is zoned with one target port WWPN from each fabric, but has a single host path from the fabric where the host port is connected.
- This policy is useful where initiator WWPNs do not appear in the fabric upfront, as in the following cases:
- SAN‑boot scenarios
- NPIV (N_Port ID Virtualization) WWPN configurations
- Zone naming convention
-
Zones are created with names in the following format: IBM_SV_<Target Port WWPN>_00, for example, IBM_SV_50050768101655c4_00. This predictable naming scheme allows easy identification of IBM Storage managed zones in the switch fabric.
- Zone name prefix configuration
-
The storage administrator can optionally specify a zone name prefix for all automatically created zones. The prefix is configured by using the chsystem command.
- The zone name prefix must be configured before the first auto zoning enabled portset is created on the system.
- After the prefix is set on a system that contains any auto zoning enabled portset, the prefix cannot be modified.
-
If no prefix is provided, the default naming scheme is used, for example,
IBM_SV_<target-WWPN>_00.
This mechanism ensures naming consistency across all auto‑generated zones in the fabric.
If the user configures the prefix FC_Infinity, it is automatically prepended to the zone name. For example, if the prefix is FC_Infinity and the target port WWPN is 50050768101655c4, the resulting auto‑generated zone name is FC_Infinity_IBM_SV_50050768101655c4_00.
This example demonstrates how user‑defined prefixes are applied to the default zone naming format.
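The naming scheme, including the optional prefix, can be sketched as follows. The `IBM_SV_<WWPN>_00` format and prefix behavior come from the text above; the function itself is only an illustration:

```python
def zone_name(target_wwpn: str, prefix: str = "") -> str:
    """Build the auto-generated zone name for a target port WWPN.
    A user-configured prefix, if set, is prepended to the default
    IBM_SV_<WWPN>_00 format."""
    base = f"IBM_SV_{target_wwpn}_00"
    return f"{prefix}_{base}" if prefix else base

# Default naming scheme.
assert zone_name("50050768101655c4") == "IBM_SV_50050768101655c4_00"
# With the user-configured prefix from the example above.
assert zone_name("50050768101655c4", "FC_Infinity") == \
    "FC_Infinity_IBM_SV_50050768101655c4_00"
```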
For information on auto zoning, see Auto zoning.
Paths to hosts
The total supported number of paths for a host to a volume must not exceed 4 per node canister, for a total of 8 per system. If high availability is configured between systems, then the limit remains 4 per node canister for a total of 16 paths across both systems. Configurations in which this number is exceeded are not supported.
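The path limits above can be expressed as a quick supportability check. This is a sketch; the function and parameter names are illustrative, and it assumes two node canisters per system as stated in the text:

```python
def paths_supported(paths_per_node_canister: int,
                    ha_between_systems: bool) -> bool:
    """A host-to-volume configuration is supported only if it stays
    within 4 paths per node canister: 8 per system, or 16 in total
    when high availability spans two systems."""
    if paths_per_node_canister > 4:
        return False
    canisters = 2 * (2 if ha_between_systems else 1)  # 2 per system
    total = paths_per_node_canister * canisters
    limit = 16 if ha_between_systems else 8
    return total <= limit

assert paths_supported(4, ha_between_systems=False)   # 8 paths: supported
assert paths_supported(4, ha_between_systems=True)    # 16 paths: supported
assert not paths_supported(5, ha_between_systems=False)  # exceeds the limit
```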
Portsets help avoid incorrect zoning configurations and limit the number of paths through which hosts can access external storage devices. Zoning host ports to too many ports of an external system can create redundant paths. Also, unnecessary host logins to too many Fibre Channel ports of an external system consume resources and lead to non-uniform distribution of host logins across ports. Portsets limit the number of paths through which the host can access the external storage system and distribute the host logins evenly across multiple ports. For more information on Fibre Channel portsets, refer to Portsets.
A Fibre Channel portset is associated with Fibre Channel ports and host objects. The host port WWPNs must be zoned only to the external system Fibre Channel ports that are associated with the portset. If host ports are zoned to any other Fibre Channel port, the system reports a non-portset member login. Such logins can be removed by correcting the zoning. An administrator can identify such logins by using the lsfabric command. The mkhost command shows host login counts on ports and enables the administrator to identify less loaded portsets. The command also shows the WWPNs that an administrator can use with host ports for zoning.
Adding WWPNs to portsets provides greater flexibility and supports specific functional use cases. For example, the SCSI host WWPN on a Fibre Channel port can be added to Portset1, whereas the NVMeFC host WWPN on that port can be added to a different portset.
To find the worldwide port names (WWPNs) that are required to set up Fibre Channel zoning with hosts, use the lstargetportfc command. This command also displays the current failover status of host I/O ports.
Host zones
The configuration rules for host zones are different depending upon the number of hosts that access the system. For configurations of fewer than 64 hosts per system, the system supports a simple set of zoning rules that enable a small set of host zones to be created for different environments. For configurations of more than 64 hosts per system, the system supports a more restrictive set of host zoning rules.
Zones that contain host HBAs must ensure that HBAs in dissimilar hosts, or dissimilar HBAs within a host, are in separate zones. Dissimilar hosts are hosts that run different operating systems or are different hardware products; different levels of the same operating system are regarded as similar.
You can map each volume that you create on the system to multiple HBA Fibre Channel ports in a specific host. There can also be multiple paths across the SAN. For these reasons, each host must run multipathing software, such as the Microsoft Device Specific Module (MSDSM). The multipathing software manages the many paths that are available to the volume and presents a single storage device to the operating system. The system supports various multipathing software packages. The specific multipathing software that the system supports depends on the host operating system that you use the software with.
- Each system node has multiple ports and each I/O group has two system nodes. Therefore, without any zoning, the number of paths to a volume is the number of system ports in the I/O group times the number of host ports.
- This rule exists to limit the number of paths that must be resolved by the multipathing device driver.
If you want to restrict the number of paths to a host, zone the switches so that each HBA port is zoned with one system port for each node in the clustered system. If a host has multiple HBA ports, zone each port to a different set of system ports to maximize performance and redundancy.
To obtain the best overall performance of the system and to prevent overloading, the workload to each port must be equal. This can typically involve zoning approximately the same number of host Fibre Channel ports to each Fibre Channel port.
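The path arithmetic described above can be illustrated directly. The numbers are examples, and the functions are a sketch of the rules rather than a product tool:

```python
def unzoned_paths(system_ports_in_io_group: int, host_ports: int) -> int:
    """Without zoning, every system port in the I/O group can log in
    to every host port, so paths multiply."""
    return system_ports_in_io_group * host_ports

def restricted_paths(nodes_in_system: int, host_ports: int) -> int:
    """With each HBA port zoned to one system port per node, the
    path count drops to nodes x host ports."""
    return nodes_in_system * host_ports

# Example: an I/O group with 8 system ports and a host with 2 HBA ports.
assert unzoned_paths(8, 2) == 16   # too many paths for the multipath driver
# Zoning each HBA port to one port on each of the 2 nodes: 4 paths.
assert restricted_paths(2, 2) == 4
```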
- Systems with fewer than 64 hosts
-
For systems with fewer than 64 attached hosts, zones that contain host HBAs must contain no more than 40 initiators, including the system ports that act as initiators. A configuration that exceeds 40 initiators is not supported. For example, a valid zone can contain 32 host ports plus 8 system ports. When possible, place each HBA port in a host that connects to a node into a separate zone. Include exactly one port from each node in the I/O groups that are associated with this host. This type of host zoning is not mandatory, but is preferred for smaller configurations. Note: If the switch vendor recommends fewer ports per zone for a particular SAN, the rules that are imposed by the vendor take precedence over system rules.
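The 40-initiator ceiling for smaller configurations can be checked as follows. This is a sketch with illustrative names, counting host ports and system ports that act as initiators:

```python
def host_zone_valid(host_ports: int, system_initiator_ports: int) -> bool:
    """For systems with fewer than 64 hosts, a host zone must not
    exceed 40 initiators in total (host ports plus the system ports
    that act as initiators)."""
    return host_ports + system_initiator_ports <= 40

assert host_zone_valid(32, 8)        # 40 initiators: the valid example zone
assert not host_zone_valid(36, 8)    # 44 initiators: not supported
```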
- Systems with more than 64 hosts
-
Each HBA port must be in a separate zone and each zone must contain exactly one port from each node in each I/O group that the host accesses.
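The rule for larger configurations can be sketched as a zone builder: one zone per HBA port, containing exactly one port from each node in each I/O group that the host accesses. The data layout is illustrative:

```python
def zones_for_host(hba_ports, io_groups):
    """io_groups: a list of I/O groups, each a list of per-node port
    lists. Returns one zone per HBA port; each zone holds the HBA
    port plus exactly one port from every node in every I/O group."""
    zones = []
    for hba in hba_ports:
        members = [hba]
        for io_group in io_groups:
            for node_ports in io_group:
                members.append(node_ports[0])  # pick one port per node
        zones.append(members)
    return zones

# One I/O group with two nodes, two ports each; a host with two HBA ports.
io_groups = [[["n1p1", "n1p2"], ["n2p1", "n2p2"]]]
zones = zones_for_host(["hba1", "hba2"], io_groups)
assert zones == [["hba1", "n1p1", "n2p1"], ["hba2", "n1p1", "n2p1"]]
```

Note that each resulting zone contains exactly one port per node, keeping the per-host path count within the supported limits discussed earlier.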