WIZnet Developer Forum

UDP sends garbage when concurrent TCP socket is active

I have an application running on an STM32 Olimexino board with a W5100 shield. It uses FreeRTOS and runs four tasks, one of which handles an NTP client and another a web server.
I modified the Ethernet library myself, adding mutexes around all SPI accesses to guarantee that each read, write, and command operation completes undisturbed in this multitasking environment.
The NTP client and the web server work perfectly when used separately.
However, if I use them concurrently, the UDP transmission is corrupted by the web server's activity, but not the other way around (the web server runs unaffected by the concurrent operation).
What happens is this.
Normally the NTP client sends a 48-byte UDP datagram in NTP format every 20 seconds and expects a UDP datagram in response.
After some time of concurrent operation, the UDP datagram I capture with Wireshark has a length of 96 bytes: 48 bytes of garbage followed by the correct 48-byte NTP structure.
Of course this datagram receives no response from the server. At the next attempt, the datagram length is 144 bytes: 96 bytes of garbage followed by the correct 48-byte NTP structure.
I cannot imagine how garbage data could be inserted BEFORE the data I want to send, nor why this happens only while the web server is processing its own traffic.

Are you using the ioLibrary we provide?
If not, please test with the ioLibrary first.

Copyright © 2017 WIZnet Co., Ltd. All Rights Reserved.