TL;DR: comparing the speed of two code variants at just one input size is pointless; comparing empirical orders of growth truly reflects the algorithmic nature of the code, and will be consistent across different test platforms for the same range of tested input sizes. Comparing absolute speed values makes sense only for code variants that exhibit the same asymptotic, or at least local, growth behavior.
It is not enough to measure the speed of your two implementations at just one input size. Several data points are needed so we can estimate the empirical orders of growth of our code's runtime (since the code can be run with varying input sizes). The empirical order of growth is defined as the logarithm of the ratio of runtimes, divided by the logarithm of the ratio of the corresponding input sizes.
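As a minimal sketch of that definition (the function name and the sample numbers are illustrative, not from the original): if the runtime grows as t ~ c * n^b, then b can be estimated from any two measurements.

```python
import math

def order_of_growth(n1, t1, n2, t2):
    """Empirical order of growth between two measurements (n1, t1) and
    (n2, t2): the exponent b such that t ~ c * n^b."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# If the runtime quadruples when the input size doubles,
# the empirical order of growth is log(4)/log(2) = 2.0:
print(order_of_growth(1000, 0.5, 2000, 2.0))  # → 2.0
```

With three or more data points you can compute this exponent for each consecutive pair and see whether it is stable, rising, or falling over the measured range.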
So even if at some input size code_1 runs 10 times faster than code_2, but its runtime doubles with each doubling of the input size while code_2's grows only 1.1x, very soon code_2 will become much faster than code_1.
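A quick back-of-the-envelope check of that claim (the starting times 1.0 and 10.0 are made-up numbers for illustration):

```python
# code_1 starts 10x faster, but its runtime doubles with each
# doubling of the input size; code_2's runtime grows only 1.1x.
t1, t2 = 1.0, 10.0
doublings = 0
while t1 <= t2:
    t1 *= 2.0   # code_1: 2.0x per doubling of n
    t2 *= 1.1   # code_2: 1.1x per doubling of n
    doublings += 1
print(doublings)  # → 4
```

After only four doublings of the input size, code_2 already overtakes code_1, despite its tenfold initial handicap.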
Thus the real measure of an algorithm's efficiency is its runtime complexity (and its space complexity, i.e. memory requirements). And when we measure it empirically, we measure it only for the specific code at hand (in a certain range of input sizes), not for the algorithm itself, i.e. its ideal implementation.
In particular, the theoretical complexity of trial division is O(n^1.5 / (log n)^0.5), in n primes produced, usually seen as an empirical order of growth of ~ n^1.40..1.45 (though it can be ~ n^1.3 initially, at smaller input sizes). For the sieve of Eratosthenes it is O(n log n log (log n)), usually seen as ~ n^1.1..1.2. But of course there exist suboptimal implementations of both trial division and the sieve of Eratosthenes that run at ~ n^2.0 or worse.
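To make this concrete, here is one possible sketch (not your code, and the input sizes are arbitrary) of measuring the empirical order of growth of the two approaches by doubling the input size:

```python
import math
import timeit

def sieve(limit):
    """Sieve of Eratosthenes: all primes below `limit`."""
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # strike out multiples of p, starting from p*p
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
    return [i for i, v in enumerate(is_prime) if v]

def trial_division(limit):
    """Primes below `limit`, dividing each candidate by the primes
    found so far, up to the candidate's square root."""
    primes = []
    for n in range(2, limit):
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
    return primes

# Doubling the input size and taking log2 of the runtime ratio
# estimates each implementation's empirical order of growth:
for f in (sieve, trial_division):
    t1 = timeit.timeit(lambda: f(50_000), number=1)
    t2 = timeit.timeit(lambda: f(100_000), number=1)
    print(f.__name__, math.log2(t2 / t1))
```

On a typical run the sieve's exponent lands well below trial division's, though single-shot timings are noisy; averaging several runs per size, and using three or more sizes, gives a much steadier picture.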
So no, that doesn't prove anything. One data point is meaningless; at least three are needed to get the "big picture", i.e. to be able to predict with some confidence the runtime / space needed for larger input sizes.
Prediction with measurable confidence is what the scientific method is all about.
By the way, your reported runtimes are very long. The computation of 10,000 primes should be near-instantaneous, much less than 1/100th of a second for a program run on a fast box. Perhaps you are measuring printing time as well. Don't. :)