WIZnet Developer Forum

DNS Malformed Packet

I’m trying to set up DNS using the W5100S and ioLibrary on my TI CC1310 32-bit ARM processor.

I’m running into a problem where the data arriving at the gateway is completely different from the data in the buffer that I am initialising with DNS_init() and sending with DNS_run(). I can see this buffer being correctly constructed with all the right DNS packet data, but it comes out in Wireshark as ‘Malformed’.

My code:

// Resolve an IP address for a given domain name
static void dnsRequest(void) // (void* domain, uint8_t* ip)
{
    int32_t ret;
    uint8_t dns[] = {192, 168, 1, 101}; // temporarily use the PC's IP so I can view the DNS request

    DNS_init(SOCKET_DNS, dnsBuf);  // SOCKET_DNS = 3; dnsBuf length = 256

    ret = DNS_run(&dns[0], domainName, domainIp); // TODO doesn't work (wrong
    if (ret) {
        ret = 0;
    }
}

The packet:

I can see the packet in its correct form all the way through to wiz_send_data() in w5100s.c, so I am fairly certain the data at the address in the buffer pointer is not being corrupted.

Anyone have any idea?


I tested DNS using the W5100S-EVB and ioLibrary, and it worked well.
I can see the flags are wrong in your packet, unlike in mine.
The Opcode should be 0000, not 1010. Likewise, Questions should be 1, not 4164.
So I think there may be a problem with your hardware, interface, or environment.

Hi Becky,

I think this might be related to another problem I am seeing.

My packets from socket 0 and socket 1 are as I would expect, but everything coming out of socket 3 and socket 4 is not what I am sending at all. DNS was tested on socket 4.

Here I am sending the exact same packet from the same buffer on socket 2 and socket 3, with different outputs. Socket 2 shows the correct data on the left of the screenshot below.

static void logPacketOverEthernet(void)
{

    uint8_t test[] = "testo\n";
    int32_t udpTxStatus;

    uint8_t testIP[] = {192, 168, 1, 101};

    // Send the same packet on both sockets (test is an array, so pass it
    // directly rather than &test, which has the wrong pointer type)
    udpTxStatus = sendto(SOCKET_UDP1, test, sizeof(test), testIP, defaultUdpDstPort);
    if (udpTxStatus < 1) {
        // failed
    }
    udpTxStatus = sendto(SOCKET_UDP2, test, sizeof(test), testIP, defaultUdpDstPort);
    if (udpTxStatus < 1) {
        // failed
    }

}

My initialisation is here:

void w5100s_init(void)
{

    // Pull the chip out of reset
    PIN_setOutputValue(ethPinHandle, Board_ETHERNET_RST, 1);
    // Wait 62ms for startup
    Task_sleep(62*(1000/Clock_tickPeriod));

    intr_kind temp = IK_DEST_UNREACH;
    unsigned char W5100S_AdrSet[2][4] = {{2,2,2,2},{2,2,2,2}}; // 2KB TX/RX per socket

    if(ctlwizchip(CW_INIT_WIZCHIP,(void*)W5100S_AdrSet) == -1)
    {
        printf("W5100S initialized fail.\r\n");
    }

    if(ctlwizchip(CW_SET_INTRMASK,&temp) == -1)
    {
        printf("W5100S interrupt\r\n");
    }

    uint8_t tmp1, tmp2;
    uint8_t phyChecks = 0;

    // Check a link exists (i.e. Ethernet connected)
    while(1){
        ctlwizchip(CW_GET_PHYLINK, &tmp1 );
        ctlwizchip(CW_GET_PHYLINK, &tmp2 );
        if(tmp1==PHY_LINK_ON && tmp2==PHY_LINK_ON) break;
        phyChecks++;
        if(phyChecks>4) break; // TODO Use this to return 0; and do better management of Ethernet in general
    }

    ctlnetwork(CN_SET_NETINFO, (void*) &gWIZNETINFO);

    /* Initialise UDP socket for Etherbridge Multicast communication */
    do{
        setSn_RTR(SOCKET_UDP2, 2000);                        // Retry time: 2000*100us = 0.2s
        setSn_RCR(SOCKET_UDP2, 2);                           // Num retries
        setSn_IMR(SOCKET_UDP2, 0xFF-6);                      // Int Mask Reg (Turn off Recv, Dcon, Con interrupts)
        setSn_MR(SOCKET_UDP2, Sn_MR_MC | Sn_MR_MULTI | Sn_MR_UDP); // Multicast, UDP
        setSn_MR2(SOCKET_UDP2,0x20);                         // Block UDP broadcast packets
        setSn_PORT(SOCKET_UDP2, defaultUdpSrcPort);          // Source port
        setSn_DPORT(SOCKET_UDP2, defaultUdpDstPort);         // Destination port
        setSn_CR(SOCKET_UDP2, Sn_CR_OPEN);                   // Command Reg: open socket
    } while(getSn_SR(SOCKET_UDP2) != SOCK_UDP);              // Check socket has been configured and started

    /* Initialise the socket for UDP datalogging */
    do{
        setSn_RTR(SOCKET_UDP1, 2000);                        // Retry time: 2000*100us = 0.2s
        setSn_RCR(SOCKET_UDP1, 2);                           // Num retries
        setSn_IMR(SOCKET_UDP1, 0xFF-6);                      // Int Mask Reg (Turn off Recv, Dcon, Con interrupts)
        setSn_MR(SOCKET_UDP1, Sn_MR_MC | Sn_MR_MULTI | Sn_MR_UDP);  // Multicast, UDP
        setSn_MR2(SOCKET_UDP1,0x20);                         // Block UDP broadcast packets
        setSn_PORT(SOCKET_UDP1, defaultUdpSrcPort);          // Source port
        setSn_DPORT(SOCKET_UDP1, defaultUdpDstPort);         // Destination port
        setSn_CR(SOCKET_UDP1, Sn_CR_OPEN);                   // Command Reg: open socket
    } while(getSn_SR(SOCKET_UDP1) != SOCK_UDP);              // Check socket has been configured and started

}

I thought it could be an RX/TX memory allocation issue, but the default is 2KB RX/TX for each socket. I have checked that the RMSR and TMSR registers on the device are set to the default (0x55).

As a follow up to this, I’ve also read back the socket Tx buffer on the W5100S and have seen that the memory does in fact store the correct data.

My process was:

  1. WIZCHIP_READ_BUF() to read socket tx buffer -> view in debugger
  2. initialise new packet data buffer
  3. call sendto() function, passing the packet data buffer
  4. WIZCHIP_READ_BUF() to read socket tx buffer -> observe change in data in that buffer

So it seems the data is being written where it’s being told…

When wiz_send_data() is called I can see where the data is being put in memory, which is based on some offset calculations and a call to getSn_TxBASE(sn).

I’ve looked at the txBase values for each socket:

  • Socket 0: 0x4000 (txBase)
  • Socket 1: 0x4400
  • Socket 2: 0x4800
  • Socket 3: 0x4A00

You’ll notice that there is only 1KB between those memory locations, but my initialisation sets all TX and RX buffers to 2KB!

So it seems that the chip is assuming the buffers are 2KB long, but the driver is treating them as if they are 1KB.

Funnily enough, if I send data to socket 2 (txBase + 2×1KB), then send to socket 1 (txBase + 1×1KB), I will see nonsense in the first packet and the first packet’s data coming out on the second send:

  • Send “Sck2” to socket 2 -> nonsense packet comes out
  • Send “Sck1” to socket 1 -> “Sck2” comes out

Either the ioLibrary driver has an error in it, or I am missing an additional step to configure the socket Tx buffer size in the chip.

This is the W5500’s memory organization; the W5100S is similar.
Therefore, it is not true that there is only 1KB per socket: there is 2KB of physical memory for each socket’s buffer.

If you can give me your firmware, I’ll check. Please email me at becky@wiznet.io.

I’m not near my computer right now, but I can send firmware later.

I know there is supposed to be 2KB of memory per socket, but there is definitely only 1KB between socket memories when the TX memory is written. I can literally see the memory locations it is trying to access, and they are 1024 bytes apart!

So what I am seeing is that the ioLibrary is storing the socket TX buffers only 1KB apart, while the chip transmits as if they are 2KB apart. This explains a few other issues I had been seeing with incorrect packet data. I had mistakenly thought socket 0 and socket 1 worked, but socket 1 was in fact just sending what I had previously tried to write from socket 2.

I feel like I may have just missed some sort of configuration step because, as you say, this is not how it’s supposed to be. Can you look at the w5100s_init function I posted and let me know if I have missed anything? Or are there any other unusual config options/defines required?

For now I have implemented a quick hacky fix that multiplies the socket TX memory offset by two, so the packet is stored in the correct location for the chip to send from.

uint32_t getSn_TxBASE(uint8_t sn)
{
   int8_t  i;
#if ( _WIZCHIP_IO_MODE_ == _WIZCHIP_IO_MODE_BUS_DIR_)
   uint32_t txbase = _W5100S_IO_BASE_ + _WIZCHIP_IO_TXBUF_;
#else   
   uint32_t txbase = _WIZCHIP_IO_TXBUF_;
#endif   
   for(i = 0; i < sn; i++)
      txbase += (getSn_TxMAX(i)<<1); // HACKY FIX: bit shift to work for 2KB memory sizes!
   return txbase;
}

Hi,

I’ve already done as much as I feel I can to resolve this myself… I spent an entire day troubleshooting this to find and understand the source of the problem before implementing a fix that I feel may not be necessary.

I would really appreciate some help from WIZnet to figure out how this problem happened and how to resolve it without implementing a hacky fix to the driver.

Thanks

Still waiting on a real resolution for this!

I’m also apparently not getting the correct information back from my DNS request: I can see a packet is received, but it contains no information. I’m thinking it could be a similar memory-misalignment issue.

STILL waiting on a reply from WIZnet on this issue!

Sorry,

I thought you had solved the problem because of this post.
In fact, I have experienced this situation myself.
Like you, I debugged it to make sure the data was correct in the buffer, but the actual packet was sent with different data.
So I downloaded the latest version of ioLibrary and created a new project, and then the same code worked.
I think there is a problem with your project configuration or interface.
I hope the same works for you too.

Copyright © 2017 WIZnet Co., Ltd. All Rights Reserved.