IBM Support





APAR status

  • Closed as program error.

Error description

  • Problem Description:
    HADR reports "congestion" and transactions stall
    on the primary database after a LOAD operation is performed
    there. HADR reports "congestion" when it is unable to transfer
    all desired bytes into the comms layer (TCP/IP): the code HADR
    receives back on its send attempt indicates that none, or only
    some, of the send buffer was transferred.
    Fix needed: DB2 should allow the user to configure it to
    drop out of Peer state automatically when congestion is
    encountered. Today, as a workaround, the user may manually break
    the HADR connection, but this requires monitoring and special
    scripting or human intervention.
    Background Information:
    There are two main causes of this in an HADR environment:
    1. Competition for the comms line/network.
    In this case, overall traffic or some specific network user(s)
    caused the network to saturate. This may be addressed by
    reconfiguring the network, or may be tolerated below a certain
    level of impact.
    2. Log shipping blocked at the standby.
    In this case, the HADR standby stops receiving log pages over
    TCP/IP because it has no room to place them in its receive
    buffer. When this happens, other buffering in the end-to-end
    pipeline, e.g., OS or comms system defined buffering in the
    sender and in the receiver, may also fill up. When the
    end-to-end pipeline is full, new sends from primary to standby
    over TCP/IP will fail. The condition should clear up, at least
    temporarily, when the standby makes forward progress on log
    replay, which frees up space in its receive buffer.
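    The failed-send condition described above can be illustrated
    outside of DB2. The following sketch uses plain Python sockets
    (nothing DB2-specific; the page size and drain amount are
    illustrative assumptions) to show a sender discovering that none,
    or only some, of its buffer was accepted once the receiver stops
    draining the pipe, and recovering once the receiver reads again:

```python
# Minimal sketch (not DB2 code): a non-blocking sender detects
# "congestion" when the peer stops draining the connection.
import socket

primary, standby = socket.socketpair()  # stands in for the HADR link
primary.setblocking(False)

chunk = b"L" * 4096          # pretend 4 KB log page
sent_pages = 0
congested = False
while not congested:
    try:
        n = primary.send(chunk)
        sent_pages += 1
        if n < len(chunk):
            # partial send: only some of the buffer was transferred
            congested = True
    except BlockingIOError:
        # none of the buffer was transferred: the end-to-end pipeline
        # (receiver-side and sender-side OS buffers) is full
        congested = True

# The "standby" makes forward progress by draining its receive side,
# which frees buffer space and lets the sender proceed again.
standby.recv(65536)
n = primary.send(chunk)      # accepted (at least partially) again
print(sent_pages, congested, n > 0)
```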
    Congestion on the HADR primary is accompanied by a negative
    performance impact. The impact may be tolerable or intolerable,
    depending on the frequency and duration, and the needs of a
    particular installation. It is especially problematic in Peer
    state; if the primary cannot send data to the standby, logging
    will be blocked. This is true in any synchronization mode; even
    ASYNC mode requires the primary to successfully pass the log
    data to the comms layer before considering a log flush (and thus
    a transaction commit) to be complete in Peer state. During
    Remote Catchup state, congestion can slow down the progress of
    the standby by interfering with log shipping, but it is unlikely
    to cause much direct impact on the primary's performance, since
    current logging is not waiting for log shipping to occur in this
    state.
    The second type of congestion identified above is often
    associated with the replay at the HADR standby of a large,
    granular operation, such as load or offline (classic) reorg. The
    replay of these operations can take a considerable amount of
    time, and the standby does not release the associated log record
    until its replay completes. If at the same time the primary is
    performing new logged operations, this can cause the standby's
    receive buffer to fill up.
    Here's an example of how congestion may occur in a LOAD COPY YES
    scenario:
    - standby deactivated (or similar)
    - load performed on primary
    - ongoing (during/after load) logged operations performed on
      primary
    - load copy image file transferred to standby (e.g., via ftp)
    - standby re-activated
    - standby replays log records up to the "load" log record
    - standby's replay gets stuck for some time on the "load" log
      record
    - HADR pair may enter Peer state, as the receive buffer can
      initially accept the necessary log records
    - standby's receive buffer fills
    - OS (TCP/IP) buffering on standby fills
    - OS (TCP/IP) buffering on primary fills
    - primary no longer able to send to standby (congestion)
    - primary blocked until congestion clears
    - standby eventually finishes with the "load" record and moves
      on, freeing receive buffer space
    - standby again receives queued data from the comms subsystem
    - congestion clears
    - standby works through the receive buffer backlog; shorter
      periods of congestion are possible until sufficient free
      receive space is made available to stabilize ongoing log
      shipping
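    The sequence above can be sketched as a toy producer/consumer
    simulation (all numbers are invented and this is not DB2 logic):
    a bounded receive buffer fills while replay sits on the "load"
    record, the primary stalls, and shipping resumes once replay
    moves on:

```python
# Toy simulation of the step list above: the standby's replay is stuck
# on one large "load" log record while the primary keeps generating
# log, so the bounded receive buffer fills and the primary stalls.
from collections import deque

RECV_BUF = 8                         # standby receive buffer, in pages
recv_buffer = deque()
load_replay_ticks = 12               # how long "load" replay is stuck
primary_stalls = 0
shipped = replayed = 0

for tick in range(40):
    # standby side: replay is stuck on the "load" record at first
    if load_replay_ticks > 0:
        load_replay_ticks -= 1       # still replaying the big record
    elif recv_buffer:
        recv_buffer.popleft()        # normal replay frees buffer space
        replayed += 1
    # primary side: ship one page per tick unless the pipeline is full
    if len(recv_buffer) < RECV_BUF:
        recv_buffer.append("log page")
        shipped += 1
    else:
        primary_stalls += 1          # congestion: a send would fail

print(shipped, replayed, primary_stalls)
```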

Local fix

  • The user faces a choice in how to respond to congestion. If the
    user prefers that the primary not be blocked, it is possible to
    break the connection between the primary and standby. For
    example, one can "bounce" (deactivate and re-activate) the
    standby database. This will cause HADR to enter Remote Catchup
    state, wherein replay of the big operation will not cause
    push-back on the primary. However, doing this will cause the
    standby to fall further behind the primary, increasing the
    failover time and increasing the risk of transaction loss in a
    failure (especially where SYNC or NEARSYNC modes would apply in
    Peer
    state). If the user desires the primary and standby to be as
    consistent as possible, that user may have to accept that
    transactions may be blocked during replay of such big operations
    in order to keep the primary from running away from the standby.
    Some things that can help and are available now:
    1. For load copy yes, reduce standby outage time by using shared
    disk for load image file (preferred) or other measures
    (faster/wider comms to shorten ftp time; avoid ftp time
    altogether by using rehostable disk). Although this is not part
    of the congestion period shown in the steps above, it can help
    reduce the length of the congestion effect. By allowing the
    standby to restart sooner, this action may reduce the length of
    the backlog faced by the standby after the point of the "load"
    log record.
    2. Configure DB2 to spool at least a certain amount of the log
    data in memory on the standby, by increasing the size of the
    standby's log receive buffer. This is set via the DB2 registry
    variable DB2_HADR_BUF_SIZE. The standby instance must be stopped
    and restarted for changes to take effect. The needed size
    depends on the anticipated log generation rate and the duration
    of the congestion.
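    A rough sizing sketch for item 2, assuming DB2_HADR_BUF_SIZE is
    given in 4 KB log pages (verify the units against your DB2
    version's documentation) and using made-up rate and duration
    figures:

```python
# Back-of-envelope sizing for the standby's log receive buffer.
# Assumption: DB2_HADR_BUF_SIZE is specified in 4 KB log pages; the
# rate, duration, and headroom inputs below are illustrative only.
def hadr_buf_pages(log_rate_mb_per_s, congestion_s, page_kb=4, headroom=1.5):
    """Pages needed to absorb `congestion_s` seconds of log at the
    anticipated generation rate, with some headroom."""
    bytes_needed = log_rate_mb_per_s * 1024 * 1024 * congestion_s * headroom
    pages = -(-bytes_needed // (page_kb * 1024))   # ceiling division
    return int(pages)

# e.g., 5 MB/s of log generation during a 60 s replay stall:
pages = hadr_buf_pages(5, 60)
print(pages)
# then set the registry variable to that many pages and restart the
# standby instance: db2set DB2_HADR_BUF_SIZE=<pages>
```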
    3. Stop an existing case of congestion by breaking the HADR
    connection. The preferred method would be to bounce the standby,
    though it may be possible to use another approach (e.g.,
    bouncing the network). The result is HADR leaves Peer state and
    will begin log shipping again in Remote Catchup state, where it
    won't push back on the primary. One can look at this as
    essentially a temporary spooling of yet-to-be-shipped log on the
    primary's disk (or, if standby falls far enough behind, on the
    primary's log archive device). The drawback is that the standby
    is allowed to fall further behind. That may lengthen takeover
    (failover or role switch) time, and also increases the risk of
    transaction loss in case the primary site suffers a
    failure/disaster and unshipped log data (online or archive logs)
    is destroyed or not accessible in a timely manner.
    4. Reduce the duration of congestion impact by decreasing the
    granularity of replay. Consider using online reorg. Load or
    reorg smaller amounts of data per operation.
    5. Use "load copy no" and repair/rebuild the target
    table/tablespace/database later:
    - when the affected site is in the primary role (e.g., after
      failover or role switch), or
    - by reinitializing the standby via a backup/restore/re-start

Problem summary

  • n/a

Problem conclusion

  • First fixed in DB2 UDB Version 9.5, FixPak 1

Temporary fix


APAR Information

  • APAR number


  • Reported component name


  • Reported component ID


  • Reported release


  • Status


  • PE




  • Special Attention


  • Submitted date


  • Closed date


  • Last modified date


  • APAR is sysrouted FROM one or more of the following:


  • APAR is sysrouted TO one or more of the following:

Fix information

  • Fixed component name


  • Fixed component ID


Applicable component levels

  • R950 PSY



Document Information

Modified date:
02 May 2008