W5300 possible corruption during PC booting? auto-MDI/MDI-x?



I have designed an embedded board, ADC + FPGA + w5300 chip. The purpose is to stream ADC data to a PC, using UDP mode. It is working very well, data transfer rates are fast and stable, but there is a strange phenomenon:

The board has two modes, standby and streaming.

In standby mode, w5300 sends out a broadcast packet every second to MAC FF:FF:FF:FF:FF:FF to announce its presence. It also listens on another socket for incoming commands. One such command is “set destination IP address”, and another command is “start streaming mode”.

In streaming mode, ADC data is sent out in approximately 1200 UDP packets per second, or around 1.7 MByte/s. This also works very well; we can even do 3.4 MByte/s easily this way (note 1). However, when the embedded board is connected directly to a PC (without a switch or router), there can be a permanent corruption of the w5300 TX register when the PC is shut down and then booted again while the w5300 keeps running. When this happens, the UDP data is still sent out on time, to the correct IP and MAC destination, and with the correct size, but the data contents are corrupted (possibly random). The UDP payload is no longer correct.

This type of corruption does not appear to happen in standby mode. Meaning, if I reboot the PC while the embedded board is in standby mode, and then I later “start streaming mode”, the w5300 seems to work normally.

However, when the PC is rebooted while the w5300 continues to operate in streaming mode, then the corruption happens. The only way to reset the w5300 corruption is to power-cycle or reset the w5300. Simply closing and re-opening the socket does not seem to help.

The moment of w5300 corruption appears to happen very early in the PC boot process - it appears to happen at around the same time that the BIOS screen shows up. When I disconnect the w5300 ethernet cable in that time, and reconnect later, then the corruption issue does not happen.

I am wondering if this is some kind of issue related to auto-MDI/MDI-X? Maybe the PC ethernet adapter and the w5300 are attempting to negotiate auto-MDI/MDI-X, and the active UDP socket SEND commands are confusing this process?

It is very difficult to analyze the corrupted ethernet traffic, because monitoring the ethernet line at boot time can only be done at the physical layer (layer 1), and I don’t have the tools to do this properly.

Is there a way to disable MDI/MDI-X on the w5300 chip? For example, what happens if I set manual full-duplex 100Mbps instead of auto-negotiation? Will this also disable auto-MDI/MDIX on the w5300?

What happens when the FPGA tries to assert sockSEND() on the w5300 when the link is down, or when the link is currently being auto-negotiated for speed, duplex, and MDI/X? Can this cause the type of problem I observe?

Any tips or advice is greatly appreciated.

Note 1:
3.4 MByte/s data throughput from the w5300 to the PC via UDP works best when the PC is running Linux, because Linux rarely (if ever) drops UDP packets; no packet loss was observed under Linux. On Windows, using the same PC, some degree of UDP packet loss can occur, depending on the PC configuration.


In short, your question is about how the W5300 behaves in the unlinked state, am I right?

The W5300 does not check the link status; it operates the same way regardless of whether the link is up or down.
There are several kinds of unlinked state: the Ethernet link may be physically down, and a peer that does not respond to ARP requests (or to any other packet) is logically unlinked.

When the link is down and you try to send data through a UDP socket, an ARP timeout will occur. Each ARP timeout decreases the TX memory free size; eventually the TX memory is full and no more data can be sent.

At that point, if you do not check the TX memory free size (the Sn_TX_FSR register) before copying send data into TX memory, the data in TX memory becomes invalid.

From your description, I guess that you are not checking the free size of TX memory (Sn_TX_FSR) and are continuously sending data to a peer that is still unlinked.

To safely continue sending data to an unlinked peer, I recommend pseudo code along the lines of the following.

// s         : socket number
// buf       : pointer to the data to send
// send_size : number of bytes to send

// Wait until enough TX buffer space is free.
do {
    tx_free_size = getSn_TX_FSR(s);
} while (send_size > tx_free_size);

// Copy the data into TX memory and latch the write size.
for (i = 0; i < send_size; i++) Sn_TX_FIFO = *buf++;
setSn_TX_WRSR(s, send_size);

// Issue the SEND command and wait for it to complete or time out.
setSn_CR(s, Sn_CR_SEND);
do {
    isr = getSn_IR(s);
} while (!(isr & (Sn_IR_SENDOK | Sn_IR_TIMEOUT)));

if (isr & Sn_IR_SENDOK)
    setSn_IR(s, Sn_IR_SENDOK);   // clear Sn_IR_SENDOK
if (isr & Sn_IR_TIMEOUT) {
    setSn_IR(s, Sn_IR_TIMEOUT);  // clear Sn_IR_TIMEOUT
    close(s);                    // close the socket
    // re-open the socket with the same port & flags
}
return send_size;

Thank you.