Why is HttpWebResponse losing data?

In another question, people were getting incomplete data when reading from an HttpWebResponse via GetResponseStream().

I also ran into this problem while reading data from an embedded device that is supposed to send me a configuration of 1000 inputs: a 32-byte header plus 64 bytes per input (64 bytes * 1000), which adds up to 64,032 bytes of data.

Reading the response stream directly gives me data only for the first 61 and a half inputs; from there on it is all zeros.

Version A) Does not work:

    int headerSize = 32;
    int inputSize = 64;
    byte[] buffer = new byte[(inputSize * 1000) + headerSize];

    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    using (Stream stream = response.GetResponseStream())
    {
        if (stream != null)
        {
            stream.Seek(0, SeekOrigin.Begin);
            stream.Read(buffer, 0, buffer.Length);
        }
    }
    response.Close();
    return buffer;

To visualize the problem, I printed the 64 bytes of each input configuration separately. Each record consists mainly of a 40-character ASCII name field followed by several bytes holding boolean and integer values.
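
For reference, a minimal sketch of how such a per-input hex dump could be produced (this is not the code actually used in the question; it assumes the buffer, headerSize and inputSize variables from the snippet above):

    // Hypothetical dump helper: print each 64-byte record of `buffer` as hex.
    for (int i = 0; i < 1000; i++)
    {
        byte[] record = new byte[inputSize];
        Array.Copy(buffer, headerSize + i * inputSize, record, 0, inputSize);

        // BitConverter.ToString yields "46-65-6E-..."; strip the dashes for a compact line.
        string hex = BitConverter.ToString(record).Replace("-", "");
        Console.WriteLine("{0}/1000 | {1}", i + 1, hex);
    }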

Version A) Output:

       1/1000 | 46656E7374657220576F686E656E2020202020202020202020202020202020202020202020202020000000000F0EB0AA00008100000001800000100090010020
       2/1000 | 42574D20576F686E656E202020202020202020202020202020202020202020202020202020202020000000000F0EB0AA00008100000001800000100091010080
    …
      61/1000 | 53656E736F72203631202020202020202020202020202020202020202020202020202020202020200000000000000000000010003300000000001000C3010000
      62/1000 | 53656E736F7220363220202020202020202020202020202020202020202020200000000000000000000000000000000000000000000000000000000000000000
      63/1000 | 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
    …
     999/1000 | 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
    1000/1000 | 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

When I first copy the response stream to a new MemoryStream, I can read all 1000 inputs completely, without any corrupted bytes.

Version B) Works fine:

(see also https://stackoverflow.com/a/3188269/, which is what fixed my problem in the first place)

    int headerSize = 32;
    int inputSize = 64;
    byte[] buffer = new byte[(inputSize * 1000) + headerSize];

    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    using (Stream stream = response.GetResponseStream())
    {
        if (stream != null)
        {
            MemoryStream memStream = new MemoryStream();
            stream.CopyTo(memStream);
            memStream.Flush();
            stream.Close();

            memStream.Seek(0, SeekOrigin.Begin);
            memStream.Read(buffer, 0, buffer.Length);
            memStream.Close();
        }
    }
    response.Close();
    return buffer;

Version B) Output:

       1/1000 | 46656E7374657220576F686E656E2020202020202020202020202020202020202020202020202020000000000F0EB0AA00008100000001800000100090010020
       2/1000 | 42574D20576F686E656E202020202020202020202020202020202020202020202020202020202020000000000F0EB0AA00008100000001800000100091010080
    …
      61/1000 | 53656E736F72203631202020202020202020202020202020202020202020202020202020202020200000000000000000000010003300000000001000C3010000
      62/1000 | 53656E736F72203632202020202020202020202020202020202020202020202020202020202020200000000000000000000010003300000000001000C3010000
      63/1000 | 53656E736F72203633202020202020202020202020202020202020202020202020202020202020200000000000000000000010003300000000001000C3010000
    …
     999/1000 | 53656E736F7220393939202020202020202020202020202020202020202020202020202020202020000000000000000000001000DA030000000010006A050000
    1000/1000 | 53656E736F7220313030302020202020202020202020202020202020202020202020202020202020000000000000000000001000DB030000000010006B050000

From a technical point of view: why does HttpWebResponse lose data when it is accessed directly? I do not just want it to work; I want to understand why version A fails and version B succeeds, even though both rely on the same data source (response.GetResponseStream()). What happens under the hood in this case?

Thanks for your efforts!

1 answer

Check the int returned by Stream.Read, as described in the docs:

This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.

I bet that only part of the stream is returned by the first call.

If you call Stream.Read several times, you will eventually get all the bytes. The HTTP stream simply arrives more slowly than your code runs; it has not finished downloading by the time you call Read.
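
A minimal sketch of what that looks like, assuming the same request and buffer as in version A (this is not code from the original question): the return value of every Read call advances the write offset until the buffer is full or the stream ends.

    int offset = 0;
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    using (Stream stream = response.GetResponseStream())
    {
        int bytesRead;
        // Keep reading until the buffer is full or Read reports end-of-stream (0).
        while (offset < buffer.Length &&
               (bytesRead = stream.Read(buffer, offset, buffer.Length - offset)) > 0)
        {
            offset += bytesRead;
        }
    }
    response.Close();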

With CopyTo into a MemoryStream, the call blocks until the entire stream has been read. Wrapping the stream in a StreamReader and then calling ReadToEnd would succeed for the same reason.
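
For completeness, a compact variant of version B that relies on the same blocking behaviour of CopyTo (a sketch, not the answerer's code):

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (Stream stream = response.GetResponseStream())
    using (MemoryStream memStream = new MemoryStream())
    {
        stream.CopyTo(memStream);     // blocks until the response stream is exhausted
        return memStream.ToArray();   // all 32 + 64 * 1000 = 64032 bytes, if the device sent them
    }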

