I am writing an application that uploads large files to a web service using HttpWebRequest.
The application will be run by many different users with varying internet connection speeds.
I read the file asynchronously in chunks and asynchronously write each chunk to the request stream, looping via callbacks until the whole file has been sent.
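Roughly, the loop looks like this (a simplified sketch using async/await rather than the raw Begin/End callbacks; it assumes a pre-configured HttpWebRequest, and the names are illustrative, not from my actual code):

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;

static class Uploader
{
    // Streams the file to the request body one chunk at a time.
    public static async Task UploadFileAsync(HttpWebRequest request, string path, int chunkSize)
    {
        byte[] buffer = new byte[chunkSize];
        using (FileStream file = File.OpenRead(path))
        using (Stream body = await request.GetRequestStreamAsync())
        {
            int read;
            while ((read = await file.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await body.WriteAsync(buffer, 0, read); // one "write" per chunk
            }
        }
    }
}
```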
I calculate the upload speed between writes and update the GUI to show the current speed.
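Inside the loop, each write can be timed with a Stopwatch; a small helper like this (TimedWriteAsync is a hypothetical name) returns the measured speed so the caller can push it to the GUI:

```csharp
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;

static class UploadTiming
{
    // Writes one chunk and returns the observed speed in bytes/second.
    public static async Task<double> TimedWriteAsync(Stream body, byte[] buffer, int count)
    {
        var timer = Stopwatch.StartNew();
        await body.WriteAsync(buffer, 0, count);
        timer.Stop();
        return count / timer.Elapsed.TotalSeconds;
    }
}
```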
The problem I ran into is choosing a buffer size. If I make it too large, users on slow connections won't see frequent speed updates. If I make it too small, users on fast connections will churn through the read/write calls, driving up CPU usage.
Currently I start the buffer at 128 KB, and every 10 writes I check the average time per write across those 10 writes: if it is under 1 second, I increase the buffer size by 128 KB. I shrink the buffer in the same way if the average write time rises above 5 seconds.
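As a sketch, that adjustment logic looks something like this (the 1 s / 5 s thresholds, 10-write window, and 128 KB step are my real values; the MaxSize cap and all names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class AdaptiveBuffer
{
    const int Step = 128 * 1024;        // grow/shrink increment
    const int MinSize = 128 * 1024;
    const int MaxSize = 8 * 1024 * 1024; // assumed upper bound, not from my code

    readonly List<TimeSpan> window = new List<TimeSpan>();
    public int Size { get; private set; } = MinSize;

    // Called after every write with the time that write took.
    public void RecordWrite(TimeSpan writeTime)
    {
        window.Add(writeTime);
        if (window.Count < 10) return;

        double avgSeconds = window.Average(t => t.TotalSeconds);
        window.Clear();

        if (avgSeconds < 1.0 && Size < MaxSize)
            Size += Step; // fast link: bigger chunks, less per-call overhead
        else if (avgSeconds > 5.0 && Size > MinSize)
            Size -= Step; // slow link: smaller chunks, more frequent GUI updates
    }
}
```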
This works reasonably well, but it all feels very arbitrary, and there seems to be room for improvement. My question: has anyone dealt with a similar situation, and what approach did you take?
thanks