Why does this code buffer TCP output when I told it not to?

This is the code I use to test the web server in an embedded product that misbehaves when an HTTP request arrives fragmented across several TCP packets:

    /* This is all within a loop that cycles size_chunk up to the size of the whole
     * test request, in order to test all possible fragment sizes. */
    TcpClient client_sensor = new TcpClient(NAME_MODULE, 80);
    client_sensor.Client.NoDelay = true;  /* SHOULD force the TCP socket to send the packets
                                             in exactly the chunks we tell it to, rather than
                                             buffering the output. */
    /* I have also tried just "client_sensor.NoDelay = true", with no luck. */
    client_sensor.Client.SendBufferSize = size_chunk;  /* Added in a desperate attempt to fix the
                                                          problem before posting my shameful
                                                          ignorance on stackoverflow. */

    for (int j = 0; j < TEST_HEADERS.Length; j += size_chunk)
    {
        String request_fragment = TEST_HEADERS.Substring(j,
            (TEST_HEADERS.Length < j + size_chunk) ? (TEST_HEADERS.Length - j) : size_chunk);
        client_sensor.Client.Send(Encoding.ASCII.GetBytes(request_fragment));
        client_sensor.GetStream().Flush();
    }

    /* Test stuff goes here: check that the embedded web server responded correctly, etc. */

Looking at Wireshark, I see only one TCP packet containing the entire test header, not the expected (header length / chunk size) packets. I have used NoDelay to turn off the Nagle algorithm before, and it usually works the way I expect. The online documentation for NoDelay at http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.nodelay%28v=vs.90%29.aspx specifically says "Sends data immediately upon calling NetworkStream.Write" in its associated code sample, so I think I have been using it correctly all along.

This happens whether or not I step through the code. Is the .NET runtime optimizing away my packet fragmentation?

I am running x64 Windows 7, .NET Framework 3.5, Visual Studio 2010.

+4
4 answers

Grr. It was my antivirus interfering. A recent update made it start interfering with HTTP requests sent to port 80, buffering all the output until the final "\r\n\r\n" token was seen, regardless of how the OS was trying to handle the outgoing TCP traffic. I should have checked that first, but I have been using this antivirus program for so many years without ever hitting this problem that I did not even think of it. Everything works as before when I turn the antivirus off.

+2

TcpClient.NoDelay does not mean that blocks of bytes will never be aggregated into one packet. It means that blocks of bytes will not be delayed in order to aggregate them into one packet.

If you want to force a packet boundary, use Stream.Flush.
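
Roughly what I mean, as a minimal sketch (the host and request fragment are placeholders, and note that even with NoDelay, TCP itself gives no guarantee about the chunk boundaries the receiver will see):

    using System;
    using System.Net.Sockets;
    using System.Text;

    class NoDelaySendSketch
    {
        static void Main()
        {
            // Placeholder host/port; substitute the device under test.
            TcpClient client = new TcpClient("192.168.0.50", 80);
            client.NoDelay = true;  // do not delay small writes to coalesce them (Nagle off)

            NetworkStream stream = client.GetStream();
            byte[] chunk = Encoding.ASCII.GetBytes("GET / HTTP/1.1\r\n");  // placeholder fragment

            stream.Write(chunk, 0, chunk.Length);
            stream.Flush();  // flush after each chunk, as suggested above
            client.Close();
        }
    }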

+3

The MSDN docs show setting TcpClient.NoDelay = true, not the TcpClient.Client.NoDelay property. Have you tried that?
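
That is, something like this (reusing the question's NAME_MODULE and client_sensor names; the TcpClient-level property should end up setting the same option on the underlying socket):

    TcpClient client_sensor = new TcpClient(NAME_MODULE, 80);
    client_sensor.NoDelay = true;  // the property on TcpClient itself, as in the MSDN sample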

+1

The test code is fine (I assume you are sending valid HTTP). What you should check is why your TCP server behaves badly when reading from the TCP connection. TCP is a stream protocol, which means you cannot make any assumptions about the size of the data chunks you receive unless you explicitly convey those sizes in your data protocol. For example, you could prefix each of your data packets with a fixed-size prefix (say, 2 bytes) that contains the size of the data that follows.
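
As a rough sketch of that kind of length-prefixed framing (the 2-byte big-endian prefix and the helper names here are just illustrative, not code from the question):

    using System;
    using System.IO;
    using System.Net.Sockets;

    static class FramingSketch
    {
        // Send one message preceded by a 2-byte length prefix.
        public static void SendFrame(NetworkStream stream, byte[] payload)
        {
            if (payload.Length > ushort.MaxValue)
                throw new ArgumentException("Payload too large for a 2-byte length prefix.");

            byte[] prefix = new byte[2];
            prefix[0] = (byte)(payload.Length >> 8);    // high byte
            prefix[1] = (byte)(payload.Length & 0xFF);  // low byte
            stream.Write(prefix, 0, 2);
            stream.Write(payload, 0, payload.Length);
        }

        // Receive one message: first the 2-byte length, then exactly that many bytes.
        public static byte[] ReceiveFrame(NetworkStream stream)
        {
            byte[] prefix = ReadExactly(stream, 2);
            int length = (prefix[0] << 8) | prefix[1];
            return ReadExactly(stream, length);
        }

        // TCP is a stream: a single Read may return fewer bytes than requested,
        // so loop until the full count has arrived.
        static byte[] ReadExactly(NetworkStream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new EndOfStreamException("Connection closed in the middle of a frame.");
                offset += read;
            }
            return buffer;
        }
    }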

When reading HTTP, the reading is usually done in several stages: read the HTTP request line, read the HTTP headers, then read the HTTP content (if applicable). The first two parts do not carry any size information, but they do have a special delimiter (CRLF).
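
A sketch of that staged reading, byte by byte for simplicity (the names are illustrative, not your server's code):

    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    static class HttpHeadSketch
    {
        // Reads one CRLF-terminated line, returned without the CRLF.
        // Returns null when the stream ends before any data is read.
        static string ReadLine(Stream stream)
        {
            StringBuilder line = new StringBuilder();
            int previous = -1;
            int current;
            while ((current = stream.ReadByte()) != -1)
            {
                if (previous == '\r' && current == '\n')
                {
                    line.Length -= 1;  // drop the '\r' that was already appended
                    return line.ToString();
                }
                line.Append((char)current);
                previous = current;
            }
            return line.Length > 0 ? line.ToString() : null;
        }

        // Reads the request line and all header lines, stopping at the empty line
        // that separates the headers from the (optional) content.
        public static List<string> ReadRequestHead(Stream stream)
        {
            List<string> lines = new List<string>();
            string line;
            while ((line = ReadLine(stream)) != null && line.Length > 0)
                lines.Add(line);
            return lines;
        }
    }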

There is plenty of information available on how HTTP can be read and parsed.

0

Source: https://habr.com/ru/post/1389749/

