Programmatically determine the maximum transfer rate

I have a problem that requires me to determine the maximum upload and download rates, and then limit my program's usage to a percentage of them. However, I cannot come up with a good way to find those maximum values.

Currently, the only solution I can come up with is to transfer a few megabytes between the client and the server and time how long the transfer takes. However, this solution is very undesirable, because with 100,000 clients it could add far too much to our server's bandwidth usage (which is already very high).

Does anyone have a solution to this problem?

Please note that what interests me most is throttling the transfer before it leaves the Internet provider's network; I think that is the most likely bottleneck, and the point where other programs' connectivity would degrade. Correct me if I am wrong.

EDIT: After further investigation, I do not think this is possible; there are too many variables to accurately measure the maximum transfer rate at the point where traffic exits the Internet service provider's network. I will leave the question open in case someone comes up with an exact solution.

+4
source
5 answers

If you can restrict the code to Windows Vista or newer (unlikely, but who knows?), you can use SetPerTcpConnectionEStats and GetPerTcpConnectionEStats along with TCP_ESTATS_BANDWIDTH_RW_v0 to have Windows estimate the bandwidth for the connection, and then retrieve that estimate. Based on it, you can throttle the bandwidth you use.

So what would happen is that you start running the application the same way you do now, collect statistics for a while, and then introduce throttling based on what you measured during that initial period.

This has the advantage of avoiding sending extra data just to gather bandwidth information - it simply collects statistics on the data you are sending anyway. The drawback (which I suspect is more or less unavoidable) is that it still uses something approaching full bandwidth until you have an estimate of the available bandwidth (and, as mentioned above, this was only added in Windows Vista, so it is far from universally available yet).
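A minimal sketch of that approach, assuming a connected IPv4 TCP socket on Vista or later; the ESTATS functions and structs are the real iphlpapi/tcpestats ones, but the helpers and error handling are illustrative:

```cpp
// Sketch: ask Windows (Vista+) to estimate a TCP connection's bandwidth.
// Enabling collection requires administrator privileges.
// Link with iphlpapi.lib and ws2_32.lib.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <iphlpapi.h>
#include <tcpestats.h>
#include <cstdio>

// Build the MIB_TCPROW describing our own connected IPv4 socket.
static MIB_TCPROW RowFromSocket(SOCKET s) {
    sockaddr_in local{}, remote{};
    int len = sizeof(local);
    getsockname(s, (sockaddr*)&local, &len);
    len = sizeof(remote);
    getpeername(s, (sockaddr*)&remote, &len);

    MIB_TCPROW row{};
    row.dwState      = MIB_TCP_STATE_ESTAB;
    row.dwLocalAddr  = local.sin_addr.s_addr;  // already network byte order
    row.dwLocalPort  = local.sin_port;
    row.dwRemoteAddr = remote.sin_addr.s_addr;
    row.dwRemotePort = remote.sin_port;
    return row;
}

ULONG EnableBandwidthEstimation(SOCKET s) {
    MIB_TCPROW row = RowFromSocket(s);
    TCP_ESTATS_BANDWIDTH_RW_v0 rw{};
    rw.EnableCollectionOutbound = TcpBoolOptEnabled;
    rw.EnableCollectionInbound  = TcpBoolOptEnabled;
    return SetPerTcpConnectionEStats(&row, TcpConnectionEstatsBandwidth,
                                     (PUCHAR)&rw, 0, sizeof(rw), 0);
}

// Call after transferring data for a while; prints estimates in bits/sec.
ULONG ReadBandwidthEstimate(SOCKET s) {
    MIB_TCPROW row = RowFromSocket(s);
    TCP_ESTATS_BANDWIDTH_ROD_v0 rod{};
    ULONG err = GetPerTcpConnectionEStats(&row, TcpConnectionEstatsBandwidth,
                                          nullptr, 0, 0,   // no RW read-back
                                          nullptr, 0, 0,   // no static (ROS) block
                                          (PUCHAR)&rod, 0, sizeof(rod));
    if (err == NO_ERROR)
        printf("outbound: %llu bps, inbound: %llu bps\n",
               rod.OutboundBandwidth, rod.InboundBandwidth);
    return err;
}
```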

+2
source

If you have Windows machines on both ends of the connection, you can use the Background Intelligent Transfer Service (BITS) to move the data and take the whole bandwidth issue off your hands. This (almost) always-installed component is described at http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx.

You don't say whether limiting bandwidth is a usability requirement or just a cost problem, so this may not be acceptable.
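For completeness, a minimal sketch of handing a download to BITS through its COM API; the interfaces are the real ones from bits.h, but the URL and local path are placeholders and error handling is abbreviated:

```cpp
// Sketch: hand a download to BITS so Windows schedules the bandwidth.
// Link with ole32.lib.
#include <windows.h>
#include <bits.h>

int main() {
    HRESULT hr = CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
    if (FAILED(hr)) return 1;
    // BITS requires at least impersonation-level COM security.
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_CONNECT,
                         RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IBackgroundCopyManager* mgr = nullptr;
    hr = CoCreateInstance(__uuidof(BackgroundCopyManager), nullptr,
                          CLSCTX_LOCAL_SERVER,
                          __uuidof(IBackgroundCopyManager), (void**)&mgr);
    if (SUCCEEDED(hr)) {
        GUID jobId;
        IBackgroundCopyJob* job = nullptr;
        hr = mgr->CreateJob(L"ExampleTransfer", BG_JOB_TYPE_DOWNLOAD,
                            &jobId, &job);
        if (SUCCEEDED(hr)) {
            // Priorities below FOREGROUND only use otherwise-idle bandwidth.
            job->SetPriority(BG_JOB_PRIORITY_NORMAL);
            job->AddFile(L"http://example.com/payload.bin",  // placeholder URL
                         L"C:\\temp\\payload.bin");          // placeholder path
            job->Resume();  // BITS transfers in the background from here on
            job->Release();
        }
        mgr->Release();
    }
    CoUninitialize();
    return FAILED(hr) ? 1 : 0;
}
```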

+1
source

The only answers I see are the following:

  • Time the transfer of a small sample and extrapolate the transmission speed.
  • Time the actual data in chunks (say, 1 KB each) and report the average.

Some of the factors that complicate the question are:

  • The processing capacity of the sending machine (i.e., other tasks it is running).
  • Network traffic density.
  • Tasks performed on the client machine.
  • The architecture of all the machines involved.

Since the client may be running other tasks, and different tasks will be running on the host (sender) at different times, the transfer rate will vary.

I vote for sending a chunk of data, timing it, then sending another and timing that. Accumulate those durations and average over the number of chunks. This gives you a dynamic measurement that will be more accurate than any pre-calculated figure.
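A minimal sketch of that chunk-timing idea, assuming a connected blocking TCP socket; note that timing send() measures how fast the kernel accepts the data, which only approximates link throughput once the socket's send buffer has filled:

```cpp
// Sketch: send fixed-size chunks, time each send, and average the result.
#include <winsock2.h>
#include <chrono>

// Returns the average throughput in bytes/second over `chunks` sends.
double MeasureThroughput(SOCKET s, const char* data, int chunkSize, int chunks) {
    using clock = std::chrono::steady_clock;
    double totalSeconds = 0.0;
    long long totalBytes = 0;

    for (int i = 0; i < chunks; ++i) {
        auto start = clock::now();
        int sent = send(s, data, chunkSize, 0);
        if (sent <= 0) break;  // connection error or shutdown
        totalSeconds +=
            std::chrono::duration<double>(clock::now() - start).count();
        totalBytes += sent;
    }
    return totalSeconds > 0.0 ? totalBytes / totalSeconds : 0.0;
}
```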

0
source

If the problem is raw bandwidth, a feedback mechanism may work here. When a session starts, the server tells the client the rate at which it will send data. The client tracks the actual rate at which data arrives. If the receive rate falls below the advertised send rate (you can apply a threshold here, for example 90% or lower), the client notifies the server to reduce the send rate, and the process starts again. This would serve as a basic QoS mechanism.
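A minimal sketch of the client-side check in that scheme; the struct and thresholds are illustrative, and the caller is assumed to send the server a slow-down notification whenever OnData() returns true:

```cpp
// Sketch: client-side receive-rate check for the feedback scheme above.
#include <chrono>

struct RateFeedback {
    double advertisedBps;        // rate the server said it would send at
    long long bytesReceived = 0;
    std::chrono::steady_clock::time_point windowStart =
        std::chrono::steady_clock::now();

    // Call after each successful recv(); true means "ask the server to slow down".
    bool OnData(int bytes) {
        bytesReceived += bytes;
        double elapsed = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - windowStart).count();
        if (elapsed < 1.0) return false;  // measure over one-second windows
        double actualBps = 8.0 * bytesReceived / elapsed;
        bytesReceived = 0;
        windowStart = std::chrono::steady_clock::now();
        return actualBps < 0.9 * advertisedBps;  // the 90% threshold from the text
    }
};
```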

If the problem is that the connection has high latency and/or jitter, try sending the information in smaller packets (actual IP/TCP packets). Typically the system will try to use the maximum packet size, but packet fragmentation on the Internet can and does delay traffic. If that still does not improve latency, you could switch from TCP to UDP, but then you give up guaranteed delivery.

0
source

One option would be to implement something like µTorrent's UDP transport protocol (uTP) between client and server in order to keep latency low. Simply measuring the raw bandwidth will not help once some other process starts using bandwidth as well, reducing the amount that is actually free.
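uTP itself is specified as BEP 29 and is considerably more involved, but a minimal sketch of the delay-based backoff idea behind it (LEDBAT-style) might look like this; the names, target, and gain are all illustrative:

```cpp
// Sketch: shrink the send rate when measured delay rises above a baseline,
// grow it when delay is low. A real uTP stack also does windowing,
// timestamping, and loss handling.
#include <algorithm>

struct DelayBasedRate {
    double baseDelayMs   = 1e9;      // lowest delay seen (empty-queue baseline)
    double rateBps       = 125000;   // current allowed send rate (arbitrary start)
    double targetQueueMs = 100;      // extra queuing delay tolerated (LEDBAT-style)

    // Feed each new one-way-delay sample (e.g., from timestamped ACKs).
    void OnDelaySample(double delayMs) {
        baseDelayMs = std::min(baseDelayMs, delayMs);
        double queuingMs = delayMs - baseDelayMs;
        // Grow the rate while under the delay target, shrink when over it.
        double offTarget = (targetQueueMs - queuingMs) / targetQueueMs;
        rateBps = std::max(1000.0, rateBps * (1.0 + 0.1 * offTarget));
    }
};
```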

0
source

Source: https://habr.com/ru/post/1309127/

