Inbound workload queuing

The OSA-Express feature with QDIO and the Network Express feature with EQDIO can both perform inbound traffic sorting by placing inbound packets for differing workload types on separate inbound processing queues.

This function is called OSA inbound workload queuing (IWQ). For OSA-Express, IWQ must be explicitly enabled (with the WORKLOADQ parameter). For Network Express, IWQ is integrated into the base EQDIO support and is not configurable.
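
For example, a minimal TCP/IP profile sketch that enables IWQ on an OSA-Express QDIO interface might look like the following. The interface and port names are illustrative and the remaining parameters are site specific; WORKLOADQ is specified on the INBPERF DYNAMIC setting and also requires a virtual MAC (VMAC):

  INTERFACE OSAQDIO1                  ; illustrative interface name
    DEFINE IPAQENET
    PORTNAME OSAPORT1                 ; illustrative OSA-Express port name
    IPADDR 10.1.1.1/24
    VMAC                              ; WORKLOADQ requires a virtual MAC
    INBPERF DYNAMIC WORKLOADQ         ; enables inbound workload queuing (IWQ)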

With the inbound traffic stream already sorted by the OSA feature, z/OS Communications Server provides the following performance optimizations:
  • Finer tuning of read-side interrupt frequency to match the latency demands of the various workloads that are serviced.
  • Improved multiprocessor scalability, because the multiple OSA input queues are now efficiently serviced in parallel.
  • Optimized acceleration of traffic, using QDIO (EQDIO) Accelerator processing only for eligible input queues instead of for all input traffic.
Network Express provides the following additional optimizations for each input queue:
  • Dynamic storage management based on volume of workload.
  • Advanced fairness algorithm to manage processing across multiple workloads.
  • Advanced optimizations that reduce data copies for accelerated traffic.

z/OS Communications Server and the OSA feature establish a primary input queue and one or more ancillary input queues (AIQs), each with a unique read queue identifier (QID) for inbound traffic. z/OS Communications Server and the OSA feature cooperatively use the multiple queues in the following way:

  • Primary input queue: The primary input queue represents IP traffic that is for this z/OS instance and is not directed to a specific AIQ by OSA. In many cases, the primary traffic represents the main business application workloads for this z/OS instance. For OSA-Express, forwarded traffic also defaults to this queue. For Network Express, forwarded traffic can be routed to the IP router AIQ described below.
    Note: The remaining input queues are AIQs for traffic that is going through z/OS (sysplex distributed, Enterprise Extender, zCX, or IP routed) or traffic that might benefit from unique processing (bulk, Enterprise Extender, IPSec, or zCX). Separating this unique traffic from the primary traffic allows the primary traffic to be optimized, or “streamlined”, avoiding delays from unpacking, sorting, and data copies. Each AIQ is described below, along with a brief description of its associated benefits.
    • Sysplex distributor AIQ: The OSA feature directs an inbound packet (received on this interface) that is to be forwarded by the sysplex distributor to the sysplex distributor AIQ. z/OS Communications Server then tailors its processing for the sysplex distributor queue, notably by using the multiprocessor to service sysplex distributor traffic in parallel with traffic on the other queues. The sysplex distributor queue is eligible for Accelerator processing.
    • Bulk data AIQ: The TCP layer automatically detects connections that operate in a bulk-data fashion (such as the FTP data connection), and these connections are registered to the receiving OSA feature as bulk-mode connections. The OSA feature then directs an inbound packet (received on this interface) for any registered bulk-mode connection to the TCP bulk-data AIQ. z/OS Communications Server tailors its processing for the bulk queue, notably by improving in-order packet delivery on multiprocessors, which likely reduces CPU consumption and improves throughput. Interrupt processing (using dynamic LAN idle settings) and storage provisioning are also optimized for this AIQ. Like other AIQs, data on the bulk queue can be processed in parallel with traffic on the other queues.
    • Enterprise Extender (EE) AIQ: The OSA feature directs an inbound Enterprise Extender packet (received on this interface) to the Enterprise Extender AIQ. This allows z/OS Communications Server to process inbound traffic on the Enterprise Extender queue in parallel with inbound traffic on the other queues for this interface. A key optimization of the EE AIQ is the use of CSM dataspace memory for the subsequent VTAM (SNA) stack processing, which avoids data copies.
    • IPSec AIQ: The OSA feature directs inbound AH packets, ESP packets, and UDP-encapsulated ESP packets that are received on this interface to the IPSec AIQ. This allows z/OS Communications Server to process inbound traffic on the IPSec queue in parallel with inbound traffic on the other queues for this interface. A key benefit of this processing is the use of zIIP processing for this input data (when IPSec zIIP is configured).

      An IPSec packet can also be GRE-encapsulated when VIPAROUTE is used for sysplex distribution. Network Express can recognize a GRE-encapsulated IPSec packet and direct it to the IPSec input queue on the target host, allowing it to benefit from IPSec input queue zIIP processing. For OSA-Express, GRE-encapsulated IPSec traffic is directed to the primary (default) input queue.

    • zCX AIQ: The OSA feature directs inbound packets (received on this interface) that are destined for a zCX server to the zCX AIQ. This allows z/OS Communications Server to process inbound traffic on the zCX queue in parallel with inbound traffic on the other queues for this interface. It also allows this inbound traffic to be processed on a System z® Integrated Information Processor (zIIP).

      GRE encapsulation is used for zCPA (zCX) forwarded traffic. Both OSA-Express and Network Express support directing the forwarded zCPA GRE traffic to the zCX input queue on the target host.

    • IP Router AIQ (Network Express only): The Network Express feature directs inbound packets that do not match any other ancillary input queue (default traffic) to the IP router AIQ. If a packet is to be forwarded, it can be processed by the Accelerator. This allows z/OS Communications Server to separate IP forwarded traffic from the primary traffic. Accelerator processing can be targeted to the IP router and sysplex distributor AIQs, avoiding the cost of Accelerator processing on the primary input queue.

      The IP Router input queue is created for a Network Express feature when datagram forwarding is enabled (DATAGRAMFWD on the IPCONFIG statement), QDIO Accelerator is enabled (QDIOACCELERATOR on the IPCONFIG statement), and ROUTEALL is configured for this interface (VMAC ROUTEALL on the INTERFACE statement); a profile sketch of these statements follows this list.

  • The primary and bulk data input queues are always used (activated). The remaining input queues are activated only if the specific function or type of workload that is associated with the input queue is detected.
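
For illustration, the following is a minimal TCP/IP profile sketch of the statements that the IP Router AIQ (Network Express only) depends on. The interface name is illustrative, and the remaining Network Express interface parameters are omitted because they are specific to your configuration:

  IPCONFIG DATAGRAMFWD                ; enable IP datagram forwarding
           QDIOACCELERATOR            ; enable QDIO (EQDIO) Accelerator
  INTERFACE NETX1                     ; illustrative Network Express interface name
    ...                               ; other interface parameters omitted
    VMAC ROUTEALL                     ; route all traffic destined to this VMAC to the stack
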
Restrictions:
  • IWQ is not supported for a z/OS guest on z/VM® using simulated (virtual) devices, such as virtual switch (VSWITCH) or guest LAN.
  • Bulk-mode TCP connection registration is supported only in configurations in which a single inbound interface is servicing the bulk-mode TCP connection. If a bulk-mode TCP connection detects that it is receiving data over multiple interfaces, IWQ is disabled for the TCP connection and inbound data from that point forward is delivered to the primary input queue.
  • The support for internally routed IWQ traffic over a shared port differs based on the feature as follows:
    • OSA-Express QDIO IWQ is not supported for traffic over a shared OSA port; this traffic is placed on the primary input queue of the target host.
    • Network Express EQDIO IWQ is supported for internally routed traffic over a shared Network Express port.
Guidelines: IWQ requires an additional amount of fixed 4K CSM HVCOMMON storage per active AIQ. The storage provisioned for IWQ is managed differently based on the generation of OSA as follows:
  • OSA-Express IWQ storage management: The WORKLOADQ parameter enables IWQ and requires an additional amount of fixed 4K CSM HVCOMMON storage per AIQ. The amount of storage consumed per AIQ is based on the amount of storage defined for READSTORAGE for this interface. The bulk AIQ is always backed with this additional CSM storage. The remaining AIQs are not backed with the additional CSM storage until the specific function (EE, SD, IPSec, or zCX) is used. The EE AIQ is backed by fixed 4K CSM DSPACE64 storage (instead of HVCOMMON). To verify the amount of CSM storage that is being used for each input queue, display the VTAM TRLE that is associated with the interface; a display sketch follows this list. The WORKLOADQ parameter also requires an additional 36K of ECSA per AIQ.
  • Network Express IWQ storage management: IWQ is integrated into the base Network Express EQDIO support and is not configurable. IWQ requires fixed 4K CSM HVCOMMON storage for each active AIQ. Storage required for Network Express inbound (read) processing is dynamically managed, and the READSTORAGE parameter does not apply to Network Express interfaces. For additional information about the usage and monitoring of fixed storage for Network Express, see Fixed storage considerations for Network Express interfaces.
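
For example, to check the CSM storage in use for each input queue of an OSA-Express interface, you might issue the following VTAM display against the TRLE that is associated with the interface (the TRLE name shown is illustrative):

  D NET,TRL,TRLE=OSATRL1

The resulting display includes the amount of CSM storage that is being used for each input queue.
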
Tip: The additional CSM storage that is consumed by each OSA interface using IWQ also consumes fixed (real) storage. Verify that the additional fixed storage required by IWQ (per OSA interface) does not approach any of the following system limits (a command sketch follows this list):
  • The CSM FIXED MAXIMUM value used by Communications Server (use the D NET,CSM command and see the CSM FIXED MAXIMUM value defined in IVTPRM00)
  • The actual real storage available to this z/OS® system (see D M=STOR or D M=HIGH)
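
A minimal operator-command sketch for those checks (output formats vary by release):

  D NET,CSM            Display CSM storage usage, including the CSM FIXED MAXIMUM value from IVTPRM00
  D M=STOR             Display the real storage configured for this z/OS system
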
Guideline: IWQ is integrated into the base Network Express support and is not configurable. IWQ for OSA-Express must be enabled by using the WORKLOADQ parameter. IWQ requires an additional amount of fixed 4K CSM HVCOMMON storage. For additional considerations for QDIO IWQ fixed storage requirements, see Steps for enabling OSA-Express QDIO inbound workload queuing.