I am building a governance system through a DataPower XI50 with firmware 188.8.131.52 (we upgraded yesterday to try it with the very latest version).
By governance, I mean I want to send all requests/responses to a remote TCP server.
For that I have an xml firewall which is the entry point for all requests.
I tried different approaches and they all behave the same way.
I first used a url-open, then changed it to a Results Asynchronous action, and finally tried an MPG sending to a TCP backend.
With all three approaches I can see that the message is sent, BUT the DP waits for some time before sending the "FIN" to the TCP server to close the socket.
Because there are a lot of requests, the pending messages wait for the socket to close before the next one is processed. I guess there is some kind of queuing system. This "queue" fills up, and memory fills up as well.
I saw with Wireshark that it takes around 2 minutes to close the socket, although the message itself is sent in 2 seconds.
Is there a way to configure the DP to close the socket straight after sending the complete message?
By the way, I don't understand why the DP waits to close the socket even though it knows the whole message has been sent.
Please help and advise.
Pinned topic: TCP socket wait too long before closing
8 replies. Latest post 2013-01-21T13:47:22Z by tourlourou. This topic has been locked.
Re: TCP socket wait too long before closing, 2013-01-18T12:05:10Z, in response to msiebler

I'm using the TCP protocol: tcp://host:port.
In fact, after sending the message, the DP waits about 2 minutes before sending a "FIN".
My server (or netcat) replies with an "ACK" and sends a "FIN" as well.
Then the DP replies with an "ACK" and the socket is closed.
I would like to get rid of the waiting period before the DP sends its "FIN".
Re: TCP socket wait too long before closing, 2013-01-18T13:58:09Z, in response to msiebler

From a protocol point of view, the "ACK"s are returned to the DP at each step.
From a transaction point of view, I'm testing with netcat, which is not designed to return a reply to the sender, since this is plain TCP.
Knowing that the DP is waiting for some response from the server is a step forward for me.
Since I'm implementing the TCP server myself, I can reply with "something".
Could you post an example of the syntax/content of the reply the DP is waiting for?
ACCEPTED ANSWER
Re: TCP socket wait too long before closing, 2013-01-18T14:54:47Z, by SystemAdmin, in response to tourlourou

It's up to the higher-level protocol to provide some sort of "framing" so that the server will know when it has read the entire request message. This could be as simple as starting the request message with a length field.
After the server has sent its response (if it has one), it should close the TCP connection. This will indicate to DP that the transaction is complete and the url-open will return.
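The accepted answer's framing suggestion can be sketched as a tiny server. This is a minimal illustration, assuming a 4-byte big-endian length prefix as the framing convention (the answer only says "a length field"; the exact format, host, and port here are assumptions, not anything DataPower mandates):

```python
# Sketch of a framed TCP server: read a 4-byte length prefix, read exactly
# that many body bytes, optionally reply, then close the connection so the
# client knows the transaction is complete. Prefix format is an assumption.
import socket
import struct

def recv_exact(conn, n):
    """Read exactly n bytes from the socket, or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before full message")
        buf += chunk
    return buf

def serve_once(host="0.0.0.0", port=4444):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        header = recv_exact(conn, 4)              # 4-byte big-endian length
        (length,) = struct.unpack(">I", header)
        body = recv_exact(conn, length)           # exactly `length` body bytes
        conn.sendall(b"OK")                       # optional short response
    finally:
        conn.close()                              # close signals end of transaction
        srv.close()
    return body
```

Because the server knows the message length up front, it can close the socket as soon as the exchange is done instead of waiting for the client's idle timeout.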
Re: TCP socket wait too long before closing, 2013-01-21T10:01:34Z, in response to SystemAdmin

I tested your suggestion even though I found it odd. To me, the DP, which knows what it has to send, should send a "FIN, ACK" after it receives the "ACK" for each packet it sent. With Wireshark I often see it resend a packet when the "ACK" is not received fast enough. So from a protocol point of view the DP knows when to send a "FIN".
Anyway, here is the result of my test:
The message is received in full (= complete).
The server sends the "FIN, ACK", the DP replies with a "FIN, ACK", the server replies with an "ACK" and closes the socket.
BUT the sender receives an "Internal Error", and in the DP log I see this message:
"Backside header ('N/A') failed to parse due to: Failed to establish a backside connection, URL: tcp://10.2.6.31:4444/"
So the DP is interpreting the socket close as an error.
There is still something odd.
Re: TCP socket wait too long before closing, 2013-01-21T13:47:22Z, in response to tourlourou

The previous test was run through an MPG. But if I use a Result Asynchronous action, there is no error in the log and no lingering port in the TCP Status.
Does the MPG handle the TCP connection differently than the Result Asynchronous action does?
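The client-side behavior discussed throughout the thread can also be sketched. This is a hedged illustration, not DataPower's actual implementation: with plain TCP and no framing, the only way a blocking client knows the exchange is over is the server closing the socket (the empty `recv`), so if the server never closes, the client sits until its own timeout, which would match the roughly 2-minute wait seen in Wireshark. Host, port, and timeout values are illustrative assumptions:

```python
# Sketch of a blocking TCP client with no framing: send the payload, then
# read until the server closes the connection (recv returns b""). If the
# server never closes, this blocks until the socket timeout fires.
import socket

def send_and_wait(host, port, payload, timeout=120.0):
    conn = socket.create_connection((host, port), timeout=timeout)
    try:
        conn.sendall(payload)
        chunks = []
        while True:
            chunk = conn.recv(4096)   # blocks until data arrives or peer sends FIN
            if not chunk:             # empty read means the server closed the socket
                break
            chunks.append(chunk)
        return b"".join(chunks)
    finally:
        conn.close()
```

With a server that closes promptly after replying, this returns in milliseconds; against a server that holds the connection open, it illustrates why the whole transaction hangs on the teardown.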