This is the code I use to test the web server in an embedded product, which misbehaves when an HTTP request arrives fragmented across several TCP packets:
    TcpClient client_sensor = new TcpClient(NAME_MODULE, 80);
    client_sensor.Client.NoDelay = true;
    client_sensor.Client.SendBufferSize = size_chunk;

    for (int j = 0; j < TEST_HEADERS.Length; j += size_chunk)
    {
        String request_fragment = TEST_HEADERS.Substring(j,
            (TEST_HEADERS.Length < j + size_chunk) ? (TEST_HEADERS.Length - j) : size_chunk);
        client_sensor.Client.Send(Encoding.ASCII.GetBytes(request_fragment));
        client_sensor.GetStream().Flush();
    }
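NAME_MODULE, size_chunk and TEST_HEADERS come from my test harness; if you want to reproduce this, placeholder definitions along these lines are enough (the real values are different, and the class name here is just illustrative):

    using System;
    using System.Net.Sockets;
    using System.Text;

    class WebServerFragmentationTest
    {
        // Placeholder values for illustration only; the real test uses different ones.
        const string NAME_MODULE = "192.168.1.50";   // hostname or IP of the device under test
        const int size_chunk = 8;                    // bytes sent per fragment
        const string TEST_HEADERS =
            "GET / HTTP/1.1\r\n" +
            "Host: 192.168.1.50\r\n" +
            "Connection: close\r\n" +
            "\r\n";

        // ... the loop shown above goes here ...
    }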
Looking at Wireshark, I see only one TCP packet containing the entire test header, not the expected (header length / chunk size) packets. I have used NoDelay to turn off the Nagle algorithm before, and it usually works the way I expect. The online documentation for NoDelay at http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.nodelay%28v=vs.90%29.aspx specifically says "Sends data immediately after calling NetworkStream.Write" in the associated code sample, so I think I have been using it correctly all along.
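To check that I am not misreading the sample, here is the pattern I understood it to describe, going through NetworkStream.Write instead of Socket.Send (the class and method names are just illustrative, not code from my project):

    using System;
    using System.Net.Sockets;
    using System.Text;

    static class NoDelayWriteTest
    {
        // Sends 'headers' to port 80 of 'host' in chunkSize-byte pieces through
        // NetworkStream.Write, the call the MSDN NoDelay sample refers to.
        public static void SendFragmented(string host, string headers, int chunkSize)
        {
            using (TcpClient client = new TcpClient(host, 80))
            {
                client.NoDelay = true;                 // disable the Nagle algorithm
                NetworkStream stream = client.GetStream();

                for (int j = 0; j < headers.Length; j += chunkSize)
                {
                    int length = Math.Min(chunkSize, headers.Length - j);
                    byte[] fragment = Encoding.ASCII.GetBytes(headers.Substring(j, length));

                    // What I expect NoDelay to give me: each Write leaves as its own
                    // TCP segment instead of being coalesced with the next one.
                    stream.Write(fragment, 0, fragment.Length);
                }
            }
        }
    }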
This happens regardless of how I run the code. Is the .NET runtime optimizing away my packet fragmentation?
I am running Windows 7 x64, .NET Framework 3.5, and Visual Studio 2010.