I wrote a JavaScript script that is injected into our Ext JS application during automated browser testing. The script measures the time taken to load data into our grids.
Specifically, the script polls each grid, checking whether it contains either a first data row or a "no data" message; as soon as every grid satisfies this condition, the script records the difference between Date.now() and performance.timing.fetchStart and treats that as the page load time.
The script works more or less as expected; however, compared with timings measured by a human (Google stopwatch, FTW), the time it reports is consistently about 300 milliseconds longer than the stopwatch measurement.
My questions are as follows:
- Is there a flaw in this logic that would lead to an inaccurate result?
- Are there alternative, more accurate ways to take this measurement?
The script looks like this:
function loadPoll() {
    var i, duration,
        dataRow = '.firstRow',
        noDataRow = '.noData',
        grids = ['.grid1', '.grid2', '.grid3', '.grid4', '.grid5', '.grid6', '.grid7'];

    // Keep polling until every grid shows either its first data row
    // or its "no data" message.
    for (i = 0; i < grids.length; ++i) {
        var data = grids[i] + ' ' + dataRow,
            noData = grids[i] + ' ' + noDataRow;
        if (!(document.querySelector(data) || document.querySelector(noData))) {
            window.setTimeout(loadPoll, 100);
            return;
        }
    }

    // All grids are ready: record the elapsed time since fetchStart.
    duration = Date.now() - performance.timing.fetchStart;
    window.loadTime = duration;
}

loadPoll();
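Regarding my second question, one alternative would be a MutationObserver, which would capture the timestamp on the exact DOM mutation that renders the last grid, rather than on the next 100 ms poll tick. A minimal sketch, assuming the same selectors as above (I have not verified this in our harness):

(function () {
    var grids = ['.grid1', '.grid2', '.grid3', '.grid4', '.grid5', '.grid6', '.grid7'];

    // True once every grid shows a first row or a "no data" message.
    function allGridsSettled() {
        return grids.every(function (grid) {
            return document.querySelector(grid + ' .firstRow') ||
                   document.querySelector(grid + ' .noData');
        });
    }

    function recordLoadTime() {
        window.loadTime = Date.now() - performance.timing.fetchStart;
    }

    // The grids may already be rendered by the time the script is injected.
    if (allGridsSettled()) {
        recordLoadTime();
        return;
    }

    var observer = new MutationObserver(function () {
        if (allGridsSettled()) {
            recordLoadTime();
            observer.disconnect();
        }
    });

    observer.observe(document.body, { childList: true, subtree: true });
}());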
Some considerations:
Although I know that human reaction time is imprecise, I am confident that a consistent 300-millisecond discrepancy is not being introduced by the person operating the Google stopwatch.
Looking at the code, it might seem that polling several elements could account for the 300 ms discrepancy; however, when I reduce the number of polled grids from 7 to 1, the automated test still reports roughly 300 ms too much. (In any case, the 100 ms poll interval should add at most about 100 ms of quantization error, roughly 50 ms on average, so polling alone cannot explain 300 ms.)
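To break the number down further, a diagnostic variant of the poll could record the moment each grid individually becomes ready, which would show whether one slow grid or the poll cadence dominates. A sketch with the same selectors (hypothetical, not part of the current script):

var gridTimes = {};

function diagPoll() {
    var grids = ['.grid1', '.grid2', '.grid3', '.grid4', '.grid5', '.grid6', '.grid7'];
    var pending = false;

    grids.forEach(function (grid) {
        if (gridTimes[grid] === undefined) {
            if (document.querySelector(grid + ' .firstRow') ||
                document.querySelector(grid + ' .noData')) {
                // Same baseline as the real script: ms since fetchStart.
                gridTimes[grid] = Date.now() - performance.timing.fetchStart;
            } else {
                pending = true;
            }
        }
    });

    if (pending) {
        window.setTimeout(diagPoll, 100);
    } else {
        // Reveals whether one grid or the poll cadence accounts for the gap.
        console.table(gridTimes);
    }
}

diagPoll();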
Our automated tests run under Selenium with Protractor.
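For context, reading the value back on the test side could look roughly like the sketch below (browser.executeScript and browser.wait are standard Protractor APIs; the 30-second timeout is an assumption, and our actual harness may differ):

// Inside a Protractor spec: wait until the injected script has set
// window.loadTime, then read it back.
browser.wait(function () {
    return browser.executeScript('return window.loadTime;').then(function (value) {
        return typeof value === 'number';
    });
}, 30000).then(function () {
    return browser.executeScript('return window.loadTime;');
}).then(function (loadTime) {
    console.log('Grid load time: ' + loadTime + ' ms');
});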
Thanks in advance for any insight!