Using only the bandwidth that is actually available to your program

I am making a program that will download a bunch of different items. My language has cheap concurrency, so at first I thought I could simply download them all at once. The problem is that trying to use bandwidth you don't have is bad: if I download everything at once, the user has to wait for all of the items to finish before receiving any of them.

Let's say you have 10 items that can each be downloaded at 7 Mbps, and your download speed is 20 Mbps. The program should start downloading only the first three items, and begin downloading new items only after the old ones finish and bandwidth frees up again. Also note that, in general, the items will not all download at the same speed.
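To make the arithmetic concrete, here is a rough sketch of the kind of capacity check I have in mind; the numbers and names are only for illustration:

    #include <stdio.h>

    int main(void) {
        double link_mbps = 20.0;   /* total download bandwidth available    */
        double item_mbps = 7.0;    /* speed one item can be downloaded at   */
        int    items     = 10;     /* items waiting to be downloaded        */

        /* Start downloads until the next one would not get its full speed. */
        int started = 0;
        double used = 0.0;
        while (started < items && used < link_mbps) {
            used += item_mbps;
            started++;
        }

        /* With 20 Mbps and 7 Mbps per item this starts 3 downloads: the
           third one fills (and slightly oversubscribes) the link.          */
        printf("start %d of %d downloads (%.0f of %.0f Mbps)\n",
               started, items, used, link_mbps);
        return 0;
    }

The hard part, of course, is that I don't actually know the 7 Mbps or the 20 Mbps in advance.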

If I had a programmatic way to check network saturation, it would be simple (just check whether the link is saturated before spawning new threads).

+6
1 answer

As noted in the comments, you cannot do this well enough to make any guarantees. But suppose you want to do your best anyway.

This problem has two parts:

  • Determining the available bandwidth
  • Managing the bandwidth you use

Proper bandwidth management can be done in a user-space program by limiting the rate at which you read from the socket. The TCP/IP stack will notify the other end of the connection that data is queuing up on behalf of your application, and the other end will stop sending more. A convenient way to implement this rate limit is a token bucket.

A quick and dirty token bucket implementation:

    int bucket = 0;

    /* Refill thread: add the per-second allowance once a second. */
    start_thread({
        while (transfer_in_progress) {
            bucket += bytes_per_second_limit;
            sleep(1);
        }
    });

    /* Read loop: never read more than the bucket currently allows. */
    while (transfer_in_progress) {
        bytesread = read(socket, buffer, min(bucket, buffersize));
        bucket -= bytesread;
    }

If bytes_per_second_limit is set to roughly the available bandwidth, expressed in bytes per second, then this will read about as fast as the connection allows. If the connection is faster than that, you will be limited to bytes_per_second_limit. If the connection is slower, the bucket will grow without bound, at a rate proportional to the difference between the rate limit and the available bandwidth.
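As a throwaway illustration (not part of the original answer; the rates are invented), a five-second simulation makes that growth visible:

    #include <stdio.h>

    int main(void) {
        long limit  = 2500000;  /* bytes_per_second_limit, roughly 20 Mbps */
        long actual = 1250000;  /* what the link really delivers, ~10 Mbps */
        long bucket = 0;

        for (int second = 1; second <= 5; second++) {
            bucket += limit;                    /* the refill thread's job  */
            long readable = bucket < actual ? bucket : actual;
            bucket -= readable;                 /* the read loop drains it  */
            printf("after %ds: bucket = %ld bytes\n", second, bucket);
        }
        /* The bucket grows by (limit - actual) = 1250000 bytes per second. */
        return 0;
    }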

Hm!

If you start another thread and watch the bucket , you can observe two conditions:

  • If the bucket is always 0, then there is more bandwidth available, and you can increase bytes_per_second_limit, perhaps by your last best guess at the available bandwidth (from #2). Or start an additional download.
  • If the bucket is larger than it was the last time you looked, and the data points from the last few seconds indicate continued growth (a linear regression, or whatever you like), that growth rate, expressed in bytes per second, is how much you can reduce bytes_per_second_limit by to match your download rate to the available bandwidth.

The problem is that there is no guarantee your available bandwidth will remain constant, so the thread monitoring the bucket can end up bouncing back and forth between raising and lowering the limit. I suggest averaging over at least 10 or 20 seconds before making changes to the rate limit. A rough sketch of such a monitoring loop is below.
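Putting the two conditions and the averaging suggestion together, a monitoring thread might look roughly like this. It is only a sketch of the idea described above, not a tested implementation: monitor_bucket, the 15-second window, and the 25% step are placeholder choices of mine, and it reuses the bucket, bytes_per_second_limit, and transfer_in_progress variables from the pseudocode.

    #include <unistd.h>

    /* Variables from the token-bucket pseudocode above; their real
       declarations are assumed to live elsewhere. */
    extern int  bucket;
    extern long bytes_per_second_limit;
    extern int  transfer_in_progress;

    /* Watch the bucket and nudge the limit toward the real bandwidth. */
    void monitor_bucket(void) {
        const int window = 15;           /* average over 10-20 seconds */

        while (transfer_in_progress) {
            int start = bucket;
            int always_empty = 1;

            for (int i = 0; i < window && transfer_in_progress; i++) {
                sleep(1);
                if (bucket > 0)
                    always_empty = 0;
            }

            long growth_per_sec = (bucket - start) / window;

            if (always_empty) {
                /* Condition 1: the link keeps up, so there is spare
                   bandwidth. Raise the limit (or start another download). */
                bytes_per_second_limit += bytes_per_second_limit / 4;
            } else if (growth_per_sec > 0) {
                /* Condition 2: the bucket keeps growing, so the link is
                   slower than the limit by about growth_per_sec bytes/s.  */
                bytes_per_second_limit -= growth_per_sec;
            }
        }
    }

Whether condition 1 raises the limit or kicks off another download is up to you; the original question wants the latter.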

+1

Source: https://habr.com/ru/post/985416/

