What is HTTP Persistence?
Whether you realize it or not, HTTP persistence is likely something you encounter on a regular basis when your DataPower communicates with backends and clients alike. HTTP persistence is enabled by default, though it can be manually disabled. It allows clients and servers to continue exchanging HTTP requests and responses without the overhead of establishing a new TCP connection for each request. Instead, a single TCP connection is established and re-used multiple times.
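This re-use can be observed with a short, self-contained Python sketch (no DataPower involved; the local test server is a stand-in for any HTTP/1.1 backend): two requests are sent over one HTTPConnection, and the unchanged local port shows that the same TCP connection carried both exchanges.

```python
# Sketch: HTTP/1.1 keep-alive lets one TCP connection carry many requests.
# The local server and handler below are illustrative stand-ins only.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 connections persist by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # needed for keep-alive
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
first = conn.getresponse()
first.read()                                      # drain the body so the
port_after_first = conn.sock.getsockname()[1]     # connection can be re-used

conn.request("GET", "/")                          # re-uses the same socket
second = conn.getresponse()
second.read()
port_after_second = conn.sock.getsockname()[1]

print(port_after_first == port_after_second)      # same TCP connection
server.shutdown()
```

With persistence disabled, each request would open a fresh socket and the client-side port would change between requests.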
For many DataPower customers HTTP persistence is vital, playing a key role in handling a large number of transactions efficiently. It is especially beneficial with HTTPS, since you also avoid performing a full SSL/TLS handshake for every HTTP request, eliminating the overhead that the handshake adds.
How is Persistence Configured/Controlled?
For most systems, persistence is something that can be toggled on or off. When persistence is enabled, a connection that has been established and used for an HTTP request/response is left open for re-use. A connection in this state is "idle" and can be used to send new HTTP requests instead of opening a new connection. In general, timeouts are configured that allow a system to close and reclaim a connection once it has sat idle for some period of time.
For DataPower these timeouts are controlled by the “front persistent timeout” and “back persistent timeout”, both of which default to 180 seconds. This means a connection is allowed to remain idle for 180 seconds before it is considered ready for closure. Many other clients and servers refer to this as an “idle timeout”, which is typically configurable, and every implementation potentially has its own default value.
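As a rough illustration of the idle-timeout bookkeeping a client can perform, here is a hypothetical sketch. The PooledConnection class and the 30-second safety margin are illustrative assumptions, not DataPower configuration; only the 180-second default mirrors the value above.

```python
# Sketch: track when a pooled connection was last used, and refuse to
# re-use it once it approaches the server's idle timeout.
import time

SERVER_IDLE_TIMEOUT = 180.0                       # DataPower default (seconds)
CLIENT_IDLE_LIMIT = SERVER_IDLE_TIMEOUT - 30.0    # stay safely under it

class PooledConnection:
    """Hypothetical pooled-connection record, for illustration only."""
    def __init__(self):
        self.last_used = time.monotonic()
        self.open = True

    def is_safe_to_reuse(self):
        # Re-use only while well inside the server's idle window.
        idle = time.monotonic() - self.last_used
        return self.open and idle < CLIENT_IDLE_LIMIT

    def mark_used(self):
        self.last_used = time.monotonic()

conn = PooledConnection()
print(conn.is_safe_to_reuse())   # True: just created, no idle time yet
conn.last_used -= 200            # simulate 200 seconds of idleness
print(conn.is_safe_to_reuse())   # False: past the safe re-use window
```

Real connection pools apply the same idea: a connection older than the safe idle window is closed and replaced rather than re-used.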
The following product documentation pages discuss setting these timeouts on DataPower:
HTTP Race Conditions and Tuning
There are some potential negative consequences to using HTTP persistence. When a client and a server agree to re-use a connection, it is with the understanding that either side may close the connection whenever it chooses. To ensure that connections do not remain open forever, especially while sitting idle and unused, almost all servers and clients implement a timeout that triggers the closure of an idle connection.
This introduces a type of race condition. If a connection is sitting idle, at some point either the client or the server will decide to close it. In some cases, when the timing is exactly wrong, the client attempts to send a new HTTP request over the idle connection at the same moment the server decides to close the TCP connection.
The simplest way to plan for this situation, and to reduce or eliminate it, is tuning of these timeout settings. Ideally, the client's idle timeout should be roughly 30 seconds shorter than the server's. This ensures that the client will not re-use a connection that the server is about to close due to its idle timeout.
Since DataPower can act as both a client and a server in a single transaction, it is recommended to ensure you understand the timeouts that are enforced by both your backend servers as well as the clients that make calls into your DataPower. Consideration should also be given to any additional hops in your network, such as a load balancer, that may implement their own idle timeouts.
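The race can be reproduced with a local stand-in server that silently drops each connection after one response, mimicking an idle-timeout close on the backend. The retry helper is a hypothetical illustration (not a DataPower feature): it re-issues an idempotent GET on a fresh connection when re-use of a stale one fails.

```python
# Sketch of the persistence race: the client re-uses a connection the
# server has already dropped, then recovers on a fresh connection.
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class OneShotHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"            # advertises keep-alive...
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
        self.close_connection = True         # ...but drops the socket anyway
    def log_message(self, *args):
        pass

def get_with_retry(host, port, path, conn):
    """GET over an existing connection, retrying once on a stale socket.
    Safe only for idempotent requests such as GET."""
    try:
        conn.request("GET", path)
        return conn.getresponse(), conn
    except (http.client.RemoteDisconnected, ConnectionError):
        conn.close()
        fresh = http.client.HTTPConnection(host, port)
        fresh.request("GET", path)
        return fresh.getresponse(), fresh

server = HTTPServer(("127.0.0.1", 0), OneShotHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

conn = http.client.HTTPConnection(host, port)
conn.request("GET", "/")
conn.getresponse().read()                    # first exchange succeeds
time.sleep(0.2)                              # let the server drop the socket

resp, conn = get_with_retry(host, port, "/", conn)
print(resp.status)                           # retry path recovers
server.shutdown()
```

Without the retry, the second request over the stale connection fails with no HTTP response at all, which is exactly the symptom behind the errors shown below.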
Typically when this behavior occurs, it will result in the following error message(s):
[domain][0x80e0062a][mpgw][error] mpgw(ServiceName): tid(1709989911)[ClientIP] gtid(1709989911): The header is empty when connecting to URL 'http://backendserver/'.
[domain][0x80e0012b][mpgw][error] mpgw(ServiceName): tid(1709989911)[ClientIP] gtid(1709989911): Backside header ('N/A') failed to parse due to: Failed to process response headers, URL: http://backendserver/
These errors indicate that we made an HTTP request but did not receive an HTTP response. This can happen for different reasons; a very common cause is the backend server’s idle timeout closing the TCP connection before the HTTP response is sent.
To address this, consider tuning the timeouts as discussed above.