Edit (regarding the edited question):
In none of the code snippets you added to the question do I see any stream position being set.
I think you are confusing the number of bytes you asked to read with the number of bytes you actually received. This protocol may seem odd (why would you get fewer bytes than requested?), but it makes sense when you consider that you may be reading from an expensive, packet-oriented source (think: network sockets).
You might receive 6 bytes in one Read call (from one TCP packet) and only the next 4 bytes on the following call (when the next packet has arrived).
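In code, a robust consumer simply loops until it has all the bytes it needs or the stream ends. A minimal sketch (ReadFully is a hypothetical helper name, not part of your code):

    using System.IO;

    // Fill `buffer` starting at `offset`, retrying until `count` bytes
    // arrive or the stream ends. Returns how many bytes were actually read.
    static int ReadFully(Stream stream, byte[] buffer, int offset, int count)
    {
        int total = 0;
        while (total < count)
        {
            int read = stream.Read(buffer, offset + total, count - total);
            if (read == 0)
                break; // end of stream: fewer bytes than requested exist
            total += read;
        }
        return total;
    }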
Edit: In response to the example you linked in the comment:
    using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress)) {
It seems the coders are relying on prior knowledge of the underlying stream implementation, namely that this stream.Read will always return either 0 or the requested size. That looks like a risky bet to me. But if the docs for GZipStream guarantee that behavior, it may be fine. However, since the code is written against the general Stream type, the approach used in the MSDN samples, checking the exact number of bytes read, is more correct.
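A defensive version of that snippet might look like the following (a sketch, assuming gzip holds the compressed bytes from the linked example; the chunk size is arbitrary):

    using System.IO;
    using System.IO.Compression;

    using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress))
    using (MemoryStream decompressed = new MemoryStream())
    {
        byte[] chunk = new byte[4096];
        int read;
        // Accept however many bytes each Read delivers instead of
        // assuming it fills the whole chunk.
        while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
        {
            decompressed.Write(chunk, 0, read);
        }
        byte[] result = decompressed.ToArray();
    }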
The first linked example uses a MemoryStream for both writing and reading, resetting Position in between, so the data that was written first is what gets read back:
    Stream s = new MemoryStream();
    for (int i = 0; i < 100; i++)
    {
        s.WriteByte((byte)i);
    }
    s.Position = 0;
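The read half of that example then works precisely because Position was reset; it follows the same MSDN pattern of checking the return value of Read (a sketch continuing the snippet above):

    // Read the 100 bytes back from the start of the stream.
    byte[] bytes = new byte[s.Length];
    int numBytesToRead = (int)s.Length;
    int numBytesRead = 0;
    while (numBytesToRead > 0)
    {
        // Read may return fewer bytes than requested.
        int n = s.Read(bytes, numBytesRead, numBytesToRead);
        if (n == 0)
            break;
        numBytesRead += n;
        numBytesToRead -= n;
    }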
The second linked example never sets the stream's position. You would normally see a Seek call if it did. Perhaps you are confusing the offset into the data buffer with the position in the stream?
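To illustrate the difference (an illustrative sketch, not code from either example):

    using System.IO;

    Stream ms = new MemoryStream(new byte[] { 1, 2, 3, 4, 5 });
    byte[] buffer = new byte[8];

    // The second argument of Read is an offset into YOUR buffer:
    // the bytes land in buffer[3] and buffer[4].
    ms.Read(buffer, 3, 2);

    // Position and Seek, by contrast, move the STREAM's read cursor;
    // the buffer contents are unaffected.
    ms.Position = 0;
    ms.Seek(4, SeekOrigin.Begin);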