Sending a large file using HttpWebRequest, increasing / decreasing the buffer as needed

I am writing an application that uploads large files to a web service using HttpWebRequest.

This application will be launched by different people with different internet speeds.

I asynchronously read the file in chunks and asynchronously write those chunks to the request stream, looping via callbacks until the whole file has been sent.
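The read/write loop above could be sketched like this (a minimal synchronous sketch; the names `upload_in_chunks`, `source`, and `dest` are placeholders, and the real application uses HttpWebRequest's asynchronous callbacks rather than a blocking loop):

```python
import io

def upload_in_chunks(source, dest, chunk_size=128 * 1024):
    """Read `source` in chunks and write each chunk to `dest` until EOF."""
    total = 0
    while True:
        chunk = source.read(chunk_size)
        if not chunk:          # end of file reached
            break
        dest.write(chunk)      # in the real app this is the request stream
        total += len(chunk)
    return total

# usage: copy 300,000 bytes in 128 KB chunks
src = io.BytesIO(b"x" * 300_000)
dst = io.BytesIO()
sent = upload_in_chunks(src, dst)
```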

The upload speed is calculated between writes, and the GUI is then updated to show the current speed.

The problem I ran into is choosing a buffer size. If I make it too large, users with slow connections will not see frequent speed updates. If I make it too small, users with fast connections will hammer the read/write methods, driving up CPU usage.

I currently start the buffer at 128 KB, then every 10 writes I check the average write time of those 10 writes: if it is under 1 second, I increase the buffer size by 128 KB. I similarly shrink the buffer if the average write time goes over 5 seconds.
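That adjustment rule could be sketched like this (the function name, the 8 MB upper cap, and the 128 KB floor are my assumptions; the 1 s / 5 s thresholds and the 128 KB step come from the question):

```python
STEP = 128 * 1024           # grow/shrink increment (from the question)
MIN_SIZE = 128 * 1024       # assumed floor: never shrink below the start size
MAX_SIZE = 8 * 1024 * 1024  # assumed cap, not stated in the question

def adjust_buffer(current_size, last_write_times):
    """Return a new buffer size based on the average of recent write times."""
    avg = sum(last_write_times) / len(last_write_times)
    if avg < 1.0:   # writes finish quickly: connection can handle more
        return min(current_size + STEP, MAX_SIZE)
    if avg > 5.0:   # writes are slow: back off
        return max(current_size - STEP, MIN_SIZE)
    return current_size
```

Called once per 10 writes with the last 10 write durations, this reproduces the grow/shrink behavior described above.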

This works pretty well, but it all feels very arbitrary, and there seems to be room for improvement. My question is: has anyone dealt with a similar situation, and what approach did you take?

thanks

1 answer

I think this is a good approach. I used it in a large file upload as well, but with a small trick: I determined the connection speed in the first request by making a call to another service of mine. This saves the overhead of recalculating the speed on every request. The main reasons for this were:

  • On a slow connection, the speed usually fluctuates a lot, so recalculating it on every request does not make sense.

  • I had to provide a resume facility, where the user could continue uploading the file from where it last stopped.

With scalability in mind, I fixed the buffer size based on that first request. Let me know if this helps.
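The probe-once idea could be sketched like this (everything here is an assumption for illustration: the function names, the 64 KB probe size, the one-second-per-chunk target, and the clamping bounds are not from the answer):

```python
import time

PROBE_BYTES = 64 * 1024  # assumed size of the one-off speed probe

def pick_chunk_size(send_probe, target_seconds=1.0,
                    min_size=32 * 1024, max_size=4 * 1024 * 1024):
    """Time one small upload, then fix a chunk size for the whole transfer."""
    start = time.monotonic()
    send_probe(b"\0" * PROBE_BYTES)  # e.g. a call to a separate probe service
    elapsed = max(time.monotonic() - start, 1e-6)
    bytes_per_second = PROBE_BYTES / elapsed
    # Aim for roughly one chunk per `target_seconds`, clamped to sane bounds.
    size = int(bytes_per_second * target_seconds)
    return max(min_size, min(size, max_size))
```

The chosen size is then used unchanged for the rest of the upload, which also makes resuming simpler because chunk boundaries stay fixed.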


Source: https://habr.com/ru/post/917275/
