I am building a performance / load testing service. Imagine a test function like the following:
bytesPerSecond = test(filesize: 10MB, concurrency: 5)
Using this, I will fill out a table of results for different file sizes and concurrency levels. There are other variables, but you get the idea.
The test function issues concurrent requests and tracks the bandwidth. The measured speed starts at zero, then bursts and dips, until it eventually stabilizes at the "true" value.
However, it may take some time to achieve this stability, and there are many input combinations to evaluate.
How can the test function decide when it has done enough fetching? By "enough" I mean that the result would not change beyond some margin of error if testing continued.
I remember reading an article about this a while ago (by one of the authors of jsperf) that discussed a reliable method, but I can no longer find it.
One simple method would be to calculate the standard deviation from a sliding window of values. Is there a better approach?
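For what it's worth, here is a minimal sketch of that sliding-window idea in Python (the `take_sample` callback is a hypothetical placeholder for whatever takes a single bytes/sec reading): it keeps sampling until the relative standard deviation (stdev / mean) of the last N readings drops below a threshold, or a hard cap on samples is hit.

```python
import statistics
from collections import deque
from typing import Callable

def run_until_stable(
    take_sample: Callable[[], float],  # hypothetical: returns one bytes/sec reading
    window: int = 10,                  # number of recent samples to consider
    rel_std_threshold: float = 0.02,   # stop when stdev/mean of the window < 2%
    max_samples: int = 200,            # hard cap so the test always terminates
) -> float:
    """Keep sampling until the sliding window of readings stabilizes,
    then return the mean of that window as the final bytes/sec result."""
    recent = deque(maxlen=window)
    for _ in range(max_samples):
        recent.append(take_sample())
        if len(recent) == window:
            mean = statistics.mean(recent)
            stdev = statistics.stdev(recent)
            if mean > 0 and stdev / mean < rel_std_threshold:
                break  # readings have settled; stop early
    return statistics.mean(recent)
```

Using the relative rather than absolute standard deviation lets the same threshold apply across very different file sizes and bandwidths, but the threshold and window size are arbitrary knobs, which is partly why I'm asking whether there is a more principled approach.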