It would not be a big deal if there were only a small drop in performance, but the issue makes the TCP session completely inoperable: the socket and the process hang.
Background: the application downloads about 6 megabytes of data (to be precise, it plays PCM data through a DAC). The process worked reliably for a long time, but about 1.5 months ago something changed. [color=#000080]At an arbitrary point during communication, playback stops, the TX/RX activity LEDs turn off, and the application hangs[/color].
My guess is that something changed on the route from the server to the WIZnet chip, triggering this behavior. That does not mean, however, that some external failure is to blame and the WIZnet has no part in it: other computers on the same network segment work fine and do not hang or abort their TCP sessions.
I dug into my chip's driver for a while, hoping to find a bug in the code, but was unable to find anything material. Then I started to dig into the communication itself, and I found some very interesting things…
First of all, here is a picture of the "faulty" session; below I will explain how and why it fails. Open the picture in the browser, right-click on it and select "Save as…", then open it in an image viewer so that you can switch between the picture and this post interactively.
192.168.1.66 - client, W5100 with 2 kBytes TX and RX buffers
188.8.131.52 - web server, Linux
9570-9573: TCP connect; the client announces a 2048-byte RX buffer, the server a 14600-byte RX buffer.
9577-9578: the server sends 1024+1024 bytes, completely filling the chip's RX buffer.
9579: the server waits for an ACK from the client, since it knows it has filled the client's buffer and must not send more. The client responds that it now has 1024 bytes free and expects the next packet in sequence (this information is not visible in the picture).
9580: the previous exchange repeats several times, with the server waiting until the client reports free space in its RX buffer via an ACK packet.
9588: the client is quicker than the server and sends two window updates: first that it has 1024 bytes free, then that it has 2048 bytes free.
9589-9590: the server gets it and sends 1024+1024 bytes to fill the client's buffer completely, then continues normally with 1024 bytes per transaction.
[color=#FF0000]So far it works well, both in terms of correctness and speed.[/color]
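The healthy phase above follows plain TCP flow control: the server may only have as much unacknowledged data in flight as the client's advertised window allows. A minimal sketch of that rule (assuming the 1024-byte segments and 2048-byte W5100 RX buffer from the capture; the function name is mine, not from any stack):

```python
# Toy model of frames 9577-9590: 1024-byte segments into a 2048-byte buffer.
BUF = 2048
SEG = 1024

def server_can_send(bytes_in_flight, advertised_window):
    """Server may transmit a segment only if it still fits the client's window."""
    return bytes_in_flight + SEG <= advertised_window

# Buffer empty: server fires two back-to-back segments...
assert server_can_send(0, BUF)
assert server_can_send(SEG, BUF)
# ...then must wait for a window update before sending a third.
assert not server_can_send(2 * SEG, BUF)
```

This is why the server pauses at frame 9579 and resumes only after the client's ACKs report freed space.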
14964: the client tells the server it has 2048 bytes free, so the server sends two 1024-byte packets to fill its RX buffer, as before.
14965: however, something happens on the network and the client receives only one of them. The client expects packet #2757633 but gets #2758657. While Wireshark and the server believe the client's buffer is full, it is not.
14966: the client discards the out-of-order packet #2758657 and requests the missing #2757633, saying that its buffer is empty. This means the client did not keep the out-of-order packet; however, as we will see, the server expects the client to have kept it!
14967: the server retransmits the requested packet #2757633, saying that the next in sequence is #2758657, the very packet that was flagged out of order and discarded by the client.
14968: the client acknowledges the last packet and waits for #2758657, so the previously discarded frame should be retransmitted; the client also indicates that its RX buffer is free (2048 bytes).
14969: the server instead sends packet #2759681, the one after the expected #2758657. Naturally, the client deems #2759681 out of order and discards it.
14970: the client says it wants #2758657 and that its buffer is completely empty (2048 bytes).
14971: the server retransmits the requested packet #2758657, stating that the next in sequence will be #2759681. Note that at this point #2759681 has already been discarded.
14972: the client confirms the previous packet and asks for #2759681, which it discarded. The client's buffer size is 2048 bytes (empty).
14973: however, instead of sending #2759681, the server sends #2760705, because it believes #2759681 is already at the client's side. Why? Because the client keeps saying its buffer is free, with 2048 bytes available for the next in-sequence packet. The server assumes the client has kept the missing packet, but the client silently discarded it without telling the server.
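The frame-by-frame exchange above can be reduced to a toy model. The simplifying assumption (mine, matching the capture's behavior): on every ACK the server sends one new in-sequence segment, which the client discards, and only a retransmission timeout makes it resend the segment the client actually asked for. The net effect is exactly one 1024-byte segment delivered per timeout:

```python
# Toy model of the one-packet-behind lockstep seen in frames 14965-14973.
SEG = 1024

def lockstep(rounds):
    expected = 0       # client side: start of the current sequence "hole"
    delivered = 0
    timeouts = 0
    for _ in range(rounds):
        # server sends the segment AFTER the hole; client discards it
        discarded = expected + SEG
        timeouts += 1              # retransmission timer expires...
        delivered += SEG           # ...and the resent segment fills the hole
        expected += SEG            # the hole moves forward by one segment
    return delivered, timeouts

print(lockstep(5))   # five timeouts needed to deliver five segments
```

The session never deadlocks outright; it just pays one full retransmission timeout per segment, which is what turns into a hang once the timeouts grow.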
And this one-packet-behind pattern continues. By itself it would only slightly impact performance, BUT! The server, puzzled as to why the client keeps requesting retransmission of data it should already have, increases the time between retransmissions:
14990-14991: 2 seconds
14998-14999: 4 seconds
15006-15007: 8 seconds
15014-15015: 16 seconds
15022-15023: 32 seconds
15030-15031: 64 seconds
15038-15039: 120 seconds
15046-15047: 120 seconds
15054-15055: 120 seconds
15062-15063: 120 seconds
15070-15071: 120 seconds
15078-15079: 120 seconds
15214-15215: 120 seconds
What do we see here?
The speed dropped to 1024 bytes per 120 seconds, or [color=#FF0000]about 8.5 bytes per second[/color]. I consider the communication hung.
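The interval list above matches classic exponential back-off of the retransmission timer; a quick reproduction (assuming a 2-second initial timeout and a 120-second cap, as the capture suggests), plus the throughput arithmetic:

```python
# Reproduce the retransmission intervals from frames 14990-15215,
# assuming exponential back-off: start at 2 s, double, cap at 120 s.
rto, intervals = 2, []
for _ in range(8):
    intervals.append(rto)
    rto = min(rto * 2, 120)

print(intervals)              # [2, 4, 8, 16, 32, 64, 120, 120]

# One 1024-byte segment delivered per 120-second timeout:
print(round(1024 / 120, 1))   # 8.5 bytes per second
```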
What is the issue?
The client does not tell the server that it has discarded a packet it properly received and no longer has; meanwhile the server sees that the client has enough space to hold that out-of-order packet, because the client keeps advertising 2048 bytes free. It appears that the TCP layer gives higher priority to sending new in-sequence packets and only then performs requested retransmissions. Thus the client advertising 2048 free bytes causes the server to first send 1024 bytes of new data from its sequence, and only then retransmit the requested packet. If the client advertised only 1024 bytes, the server would perform the retransmission first, and the communication would resynchronize.
How can it be fixed?
(a) The client keeps the out-of-sequence packet, which the server expects it to keep, in its buffer; it does not ACK the kept packet, but ACKs and requests the missing one while telling the server it has only 1024 bytes free. The server will then see that another in-flight packet would not fit and will retransmit the missing packet instead. This is the cleanest way to recover from the issue.
(b) By any other means, tell the server to stop its sequence and resynchronize with the client.
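Fix (a) is really just honest window accounting on the receiver side. A hypothetical sketch (function name and structure are mine, not from the W5100 driver): the advertised window must be shrunk by the space held by the buffered out-of-order segment, so the only data the server can fit is the retransmission itself.

```python
# Sketch of fix (a): keep the out-of-order segment and advertise only
# the space that is genuinely free, forcing the server to retransmit.
BUF, SEG = 2048, 1024

def advertised_window(buffered_ooo_bytes):
    """Window to advertise, excluding space held by out-of-order data."""
    return BUF - buffered_ooo_bytes

# With one 1024-byte out-of-order segment held, only 1024 bytes are
# advertised: the server can fit exactly the missing segment, nothing new.
assert advertised_window(SEG) == 1024
assert advertised_window(0) == BUF    # nothing buffered: full window again
```

Once the retransmitted segment fills the hole, both it and the buffered segment are delivered to the application, the window snaps back to 2048, and the flow continues without losing sync.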
Now I should say that what you see here is not a failure from the TCP point of view. It is a NORMAL flow in the presence of transmission problems, and there are proven, reliable mechanisms for recovering from such out-of-sequence packets. Keep in mind, however, that most implementations assume a relatively large RX buffer and a software TCP stack (not a hardwired SoC one).
Let's look at how this is handled on a PC running Microsoft Windows:
You can see the Windows machine experiencing packet loss in frames 457 and 459. The server keeps sending packets while the client asks for retransmission of a single packet, #432161. The server tracks the free buffer space on the client side and the ACKs the client sends. The last ACK from the client requested #432161 as the next packet, so the server expects all the packets it sends to be buffered at the client side. When the "burst" is finished and the server sees the request for retransmission of this poor #432161, it retransmits that single packet, and the client processes the whole buffered packet chain on its side.