Long-distance requirements for partnerships

The links between systems in a partnership that are used for replication must meet specific configuration, latency, and distance requirements.

An example configuration uses dual redundant fabrics that are configured for Fibre Channel connections. Part of each fabric is located at the local system and part at the remote system. There is no direct connection between the two fabrics.

You can use Fibre Channel extenders or SAN routers to increase the distance between two systems. Fibre Channel extenders transmit Fibre Channel packets across long links without changing the contents of the packets. SAN routers provide virtual N_ports on two or more SANs to extend the scope of the SAN. The SAN router distributes the traffic from one virtual N_port to the other virtual N_port. The two Fibre Channel fabrics are independent of each other. Therefore, N_ports on each of the fabrics cannot directly log in to each other. See the following website for specific firmware levels and the latest supported hardware:

www.ibm.com/support

If you use Fibre Channel extenders or SAN routers, you must meet the following requirements:

  • The maximum round-trip latency that is supported between sites depends on the type of partnership between the systems, the version of software, and the system hardware that is used. This restriction applies to all supported replication functions.

    The following table lists the maximum round-trip latency for each type of partnership.

  • In a Metro Mirror or non-cycling Global Mirror relationship, the bandwidth between the two sites must meet the peak workload requirements while maintaining the maximum round-trip latency between the sites. When you evaluate the workload requirement for a multiple-cycling Global Mirror relationship, you must consider the average write workload and the required synchronization copy bandwidth. If there are no active synchronization copies and no write I/O operations for volumes that are in a Metro Mirror or Global Mirror relationship, the system protocols operate with only the bandwidth that is required for intersystem heartbeat traffic. However, you can determine the actual amount of bandwidth that is required for the link only by considering the peak write bandwidth to volumes that participate in Metro Mirror or Global Mirror relationships and then adding the peak synchronization bandwidth.
  • If the link between the two sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency requirements continue to be met during single-failure conditions.
  • The configuration must be tested to confirm that any failover mechanisms in the intersystem links interoperate satisfactorily with the systems.
  • All other configuration requirements must be met.
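The sizing rule above can be sketched with hypothetical numbers: the link must carry the peak write bandwidth to replicated volumes plus the peak synchronization copy bandwidth. The workload figures below are invented examples, not measurements.

```shell
# Hypothetical link-sizing arithmetic for a Metro Mirror or Global Mirror link.
peak_write_mbps=400        # peak write bandwidth to replicated volumes (example)
peak_sync_mbps=200         # peak synchronization copy bandwidth (example)

# The link must be sized for the sum of the two peaks.
required_link_mbps=$((peak_write_mbps + peak_sync_mbps))
echo "Size the intersystem link for at least ${required_link_mbps} Mbps"
```

If the link is redundant, the same total must still be achievable with one link failed.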

If you use remote mirroring between systems with 80 ms to 250 ms round-trip latency, you must meet the following extra requirements:

  • Both the local and remote systems must support the higher round-trip latency.
  • A Fibre Channel partnership must exist between the systems, not an IP partnership.
  • All systems in the partnership must have a minimum software level of 7.4.0.
  • The RC buffer size setting must be 512 MB on each system in the partnership. This setting can be accomplished by running the chsystem -rcbuffersize 512 command on each system.
  • Two Fibre Channel ports on each node that is used for replication must be dedicated for replication traffic, by using SAN zoning and port masking.
  • SAN zoning should be applied to provide separate intersystem zones for each local-remote I/O group pair that is used for replication. Figure 1 illustrates this type of configuration.
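As an illustrative sketch only, the buffer-size and port-dedication requirements above might be applied from each system's CLI as follows. The mask values are hypothetical examples; derive yours from your own port numbering and layout.

```shell
# Illustrative sketch: apply the 512 MB remote-copy buffer and dedicate
# ports for replication traffic. Run the equivalent on each system in
# the partnership.

# Set the remote-copy buffer size, as required for 80-250 ms latency.
chsystem -rcbuffersize 512

# Example port masks (hypothetical): a 1 bit enables a port for the
# given traffic type. Here ports 3 and 4 are reserved for replication.
chsystem -partnerfcportmask 0000000000001100
chsystem -localfcportmask  1111111111110011
```

Port masking complements SAN zoning; both should agree on which ports carry replication traffic.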

In addition to the preceding list of requirements, the following guidelines are provided for optimizing performance:

  • Partnered systems should use the same number of nodes in each system for replication.
  • For maximum throughput in replication that uses Global Mirror, all nodes in each system should be used for replication, both in terms of balancing the preferred node assignment for volumes and for providing intersystem Fibre Channel connectivity.
  • Provisioning dedicated node ports for local node-to-node traffic (by using port masking) isolates the node-to-node traffic between the local nodes from other local SAN traffic, so optimal response times can be achieved.
  • Where possible, use the minimum number of partnerships between systems. For example, assume site A contains systems A1 and A2, and site B contains systems B1 and B2. In this scenario, creating separate partnerships between pairs of systems (such as A1-B1 and A2-B2) offers greater performance for replication between sites than a configuration with partnerships that are defined between all four systems.
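A minimal sketch of the paired-partnership layout described above, assuming the mkfcpartnership command and invented system names and bandwidth values:

```shell
# Run on system A1: partner with B1 only (names and values are hypothetical).
mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 B1

# Run on system A2: partner with B2 only, keeping the two replication
# pairs on separate partnerships rather than meshing all four systems.
mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 B2
```

The partnership must then be confirmed from the remote side (B1 and B2) before it becomes fully configured.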

There is no limit on the Fibre Channel optical distance between the system nodes and host servers. You can attach a server to an edge switch in a core-edge configuration with the system at the core. The system can support up to three ISL hops in the fabric. Therefore, the host server and the system can be separated by up to five Fibre Channel links. If you use longwave small form-factor pluggable (SFP) transceivers, four of the Fibre Channel links can be up to 10 km long.