PACKING

   .-PACKING--=--OFF--------------.   
>>-+------------------------------+----------------------------><
   '-PACKING--=--+-ON-----------+-'   
                 '-max_pdu_size-'     

Specifies whether HPDT packing is enabled or disabled. Use PACKING=max_pdu_size, varying max_pdu_size, to tune HPDT packing.

PACKING=OFF
Sets the packing size limit to zero, which in effect disables the function. PACKING=OFF is the default.
PACKING=ON
Enables HPDT packing by setting a packing size limit of 2048.
PACKING=max_pdu_size
Enables HPDT packing by setting a packing size limit of max_pdu_size. The valid range for max_pdu_size is 1024 to 8192 inclusive.
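
The mapping from the three settings to an effective packing size limit can be sketched as follows. This is an illustration of the documented behavior only, not the actual VTAM implementation; the function name is hypothetical.

```python
def packing_limit(setting):
    """Map a PACKING setting to its effective packing size limit.

    Illustrative sketch of the documented behavior; not VTAM internals.
    """
    if setting == "OFF":
        return 0                    # disables HPDT packing
    if setting == "ON":
        return 2048                 # documented limit for PACKING=ON
    size = int(setting)             # PACKING=max_pdu_size
    if not 1024 <= size <= 8192:    # documented valid range, inclusive
        raise ValueError("max_pdu_size must be 1024-8192 inclusive")
    return size
```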

Protocol Data Unit (PDU) is the MPC term for a logical piece of data to be read or written. A PDU consists of protocol headers, likely followed by application data. Protocol headers include the IP header, the HPR NHDR/THDR, the SNA transmission header (TH), and the SNA request header (RH).

The HPDT device driver compares the size of each outbound PDU to the packing size limit. If the PDU size is less than or equal to the limit, and other criteria are met, the PDU is physically moved to and transmitted from a packing buffer. If the PDU size is larger, the PDU is transmitted in the traditional manner using Indirect-Data-Address Word (IDAW) relocation.
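
The driver's per-PDU decision can be sketched as below. This is a hypothetical illustration of the documented comparison, with the "other criteria" collapsed into a single flag.

```python
def transmit_path(pdu_size, packing_limit, other_criteria_met=True):
    """Decide how an outbound PDU is transmitted (illustrative only).

    Returns "packing buffer" when the PDU qualifies for HPDT packing,
    otherwise "IDAW" for the traditional IDAW-relocation path.
    """
    if 0 < packing_limit and pdu_size <= packing_limit and other_criteria_met:
        return "packing buffer"
    return "IDAW"
```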

HPDT packing trades increased storage and CPU consumption for increased throughput through improved channel utilization. In the host-to-router or host-to-channel-extender configuration, where the bottleneck may be channel bandwidth or adjacent link station capacity, the benefit of the increased throughput is likely to exceed the cost of the additional host storage and CPU. In other configurations, the cost of the packing buffers or CPU resource will exceed the throughput benefit.

It is recommended that HPDT packing remain disabled under the following conditions:
  1. On systems that are storage or CPU constrained.
  2. When the majority of the PDUs are ineligible (for example, TCP/IP traffic), which causes the packing buffers to be allocated but underused.
  3. For MPC groups containing multiple write devices. The storage cost could be excessive and is therefore likely to outweigh the throughput benefit.
If none of the above conditions is a concern and HPDT packing appears beneficial, the following steps are recommended:
  1. Adjust the write data segment size so that it equals one of the CSM data space pool sizes (4K, 16K, 32K, or 60K). For more information, see the information about channel connections between APPN nodes in the z/OS Communications Server: SNA Network Implementation Guide.
  2. Review the SNA RU size (logmode entries) for native SNA traffic and the TCP/IP MTU size for Enterprise Extender traffic.
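
As a worked illustration of step 1, a helper like the following could find the smallest CSM data space pool size that holds a given write data segment. The function and its use are hypothetical; only the pool sizes (4K, 16K, 32K, 60K) come from the documentation.

```python
# CSM data space pool sizes in bytes: 4K, 16K, 32K, and 60K.
CSM_POOL_SIZES = (4096, 16384, 32768, 61440)

def matching_pool(segment_size):
    """Return the smallest CSM pool size that can hold the write data
    segment, or None if it exceeds the largest pool (60K).
    Illustrative only; not part of VTAM or CSM."""
    for pool in CSM_POOL_SIZES:
        if segment_size <= pool:
            return pool
    return None
```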

If max_pdu_size is specified within the range, and MPCLEVEL is not HPDT or the connection is not point-to-point, then the value is accepted but ignored.