I have an interpreter written in Java, and I am trying to measure the effect of various optimizations in it. To do this, I parse the code and then run the interpreter on it repeatedly, until I get 5 consecutive runs that differ from each other by a very small margin (0.1 s); the average of those runs is then taken and printed. There is no I/O and no randomness in the interpreter. Yet if I repeat the whole measurement, I get different runtimes:
91.8s
95.7s
93.8s
97.6s
94.6s
94.6s
107.4s
I have tried the server and client VMs, serial and parallel GC, large tables, and both Windows and Linux, all on the 1.6.0_14 JVM. The machine has no other processes running in the background. So my question is: what can cause these big variations, and how can I find out what it is?
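For context, the measurement loop described above can be sketched roughly like this. The names (`benchmark`, `WINDOW`, `MARGIN_SECONDS`) and the `Runnable` parameter are my own illustration, not the actual harness:

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;

public class BenchmarkLoop {
    static final int WINDOW = 5;            // number of runs that must agree
    static final double MARGIN_SECONDS = 0.1; // max spread allowed among them

    // Re-run `work` until the last WINDOW wall-clock timings all fall
    // within MARGIN_SECONDS of each other, then return their average.
    static double benchmark(Runnable work) {
        Deque<Double> recent = new ArrayDeque<>();
        while (true) {
            long t0 = System.nanoTime();
            work.run();
            double secs = (System.nanoTime() - t0) / 1e9;

            recent.addLast(secs);
            if (recent.size() > WINDOW) {
                recent.removeFirst(); // keep only the last WINDOW timings
            }
            if (recent.size() == WINDOW) {
                double spread = Collections.max(recent) - Collections.min(recent);
                if (spread <= MARGIN_SECONDS) {
                    return recent.stream()
                                 .mapToDouble(Double::doubleValue)
                                 .average()
                                 .orElse(0.0);
                }
            }
        }
    }

    public static void main(String[] args) {
        // The interpreter run would go inside the Runnable.
        double avg = benchmark(() -> { /* interpret the parsed program */ });
        System.out.printf("average: %.3f s%n", avg);
    }
}
```

Note that a loop like this only terminates once the timings stabilize, so it can spin for a long time on a noisy machine.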
The actual cause turned out to be that the program iterated toward a fixed-point solution, with the intermediate values stored in a hash set. The hash codes differed between runs, which led to a different iteration order, which in turn changed the number of iterations needed to reach the fixed point.
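The mechanism can be illustrated with a small example. The `Node` class and the explicit `seed` below are my own invention: the seed stands in for the identity hash codes that the JVM assigns differently on each run when `hashCode()` is not overridden. Logically identical sets then iterate in different orders:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class HashOrderDemo {
    // Hypothetical value type whose hash depends on a per-run "seed",
    // mimicking identity hash codes that differ between JVM runs.
    static final class Node {
        final int id;
        final int seed;
        Node(int id, int seed) { this.id = id; this.seed = seed; }
        @Override public int hashCode() { return id + seed; }
        @Override public boolean equals(Object o) {
            return o instanceof Node && ((Node) o).id == id;
        }
        @Override public String toString() { return "n" + id; }
    }

    // Insert the same five logical elements and record the order in
    // which the HashSet hands them back.
    static List<String> iterationOrder(int seed) {
        Set<Node> set = new HashSet<>();
        for (int i = 0; i < 5; i++) {
            set.add(new Node(i, seed));
        }
        List<String> order = new ArrayList<>();
        for (Node n : set) {
            order.add(n.toString());
        }
        return order;
    }

    public static void main(String[] args) {
        // Same elements, different hash values: the iteration order
        // (and hence the order a fixed-point loop visits them) changes.
        System.out.println("seed 0:  " + iterationOrder(0));
        System.out.println("seed 14: " + iterationOrder(14));
    }
}
```

If the fixed-point loop's convergence speed depends on the order in which elements are processed, this alone explains run-to-run swings of several seconds. Switching to `LinkedHashSet`, or overriding `hashCode()` with a value derived from the object's contents, makes the order reproducible.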