The third column is larger than the second for the array sizes considered.
The Big O classification tells you how the running time grows with the size of the input.
Your running time is (or should be)
A + B*N^2 for the quadratic case, and C + D*N*log(N) for the linearithmic case.
But it is quite possible that C is much larger than A, which makes the linearithmic code slower when N is fairly small.
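As a rough sketch of that effect (the constants A, B, C and D below are made up purely for illustration, not measured from anything real), plugging a few values of N into the two formulas shows where the crossover happens:

```python
import math

# Hypothetical constants, chosen only to illustrate the point: the quadratic
# algorithm has almost no setup cost, the linearithmic one has a large one.
A, B = 1.0, 0.001      # quadratic:     T(N) = A + B*N^2
C, D = 50.0, 0.01      # linearithmic:  T(N) = C + D*N*log(N)

def t_quadratic(n):
    return A + B * n * n

def t_linearithmic(n):
    return C + D * n * math.log2(n)

for n in (10, 100, 1_000, 10_000):
    print(f"N={n:>6}: quadratic={t_quadratic(n):9.1f}  linearithmic={t_linearithmic(n):9.1f}")
```

With these invented constants the quadratic version wins comfortably at N = 10 and N = 100, and only loses once N reaches the thousands.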
What makes the classification interesting is that when your input doubles from 9600 to 19200 elements, the quadratic algorithm's runtime should roughly quadruple, to about eight seconds, while the linearithmic algorithm's runtime should grow to only slightly more than double.
Thus the ratio of the two runtimes goes from 2:8 to 8:16, i.e. the quadratic algorithm is now only twice as fast instead of four times.
Double the input size again and 8:16 becomes 32:32; the two algorithms take about the same time at an input size of roughly 40,000.
At an input size of 80,000 the ratio flips: four times 32 is 128, while two times 32 is only 64. 128:64 means the linearithmic algorithm is now twice as fast as the other.
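Written out as a tiny script, using the 2 s and 8 s starting figures from the example and the idealized doubling rule just described (quadratic time ×4, linearithmic time roughly ×2 per doubling):

```python
# Idealized doubling, starting from the 2 s / 8 s figures above: each doubling
# multiplies the quadratic time by 4 and the linearithmic time by roughly 2
# (the N*log(N) factor grows slightly more than that; rounded away here).
n, t_quadratic, t_linearithmic = 9600, 2, 8

for _ in range(4):
    print(f"N={n:>6}: quadratic {t_quadratic:>4} s   linearithmic {t_linearithmic:>4} s")
    n *= 2
    t_quadratic *= 4
    t_linearithmic *= 2
```

This reproduces the progression 2:8, 8:16, 32:32, 128:64 described above.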
You should run tests with widely different sizes, say N, 2*N and 4*N, so that you can estimate your constants A, B, C and D.
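In practice that could look something like the sketch below; `my_algorithm` and the random-list input are only placeholders for whatever you are actually benchmarking:

```python
import random
import timeit

def my_algorithm(data):
    # Placeholder for the code under test; sorting is used here only so the
    # script actually runs.
    return sorted(data)

def measure(n, repeats=5):
    data = [random.random() for _ in range(n)]
    # Take the best of a few runs to reduce noise.
    return min(timeit.repeat(lambda: my_algorithm(data), number=1, repeat=repeats))

N = 100_000
times = {k * N: measure(k * N) for k in (1, 2, 4)}
for size, t in times.items():
    print(f"N={size:>7}: {t:.4f} s  ({t / times[N]:.2f}x the time at N)")
```

Comparing the growth ratios at N, 2*N and 4*N against the candidate formulas tells you which term dominates and how big the constants are.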
It all boils down to this: do not rely blindly on the Big O classification. Use it if you expect your inputs to become large; for small inputs it is entirely possible that a less scalable algorithm is more efficient.
For instance, you can see that for small input sizes the faster algorithm can be the one running in exponential time, hundreds of times faster than the logarithmic one. But as soon as the input size grows beyond about nine, the exponential algorithm's runtime explodes, while the other one's barely changes.
You might even decide to implement both versions of the algorithm and use one or the other depending on the input size. Some recursive algorithms do exactly that, switching to an iterative implementation for the smallest subproblems. In the case shown you could implement the best algorithm for each size range, but the best compromise is to use just two algorithms: the quadratic one up to N = 15, then switch to the logarithmic one.
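A size-based dispatch of that kind might look like the following sketch; the duplicate-detection problem and the threshold of 15 are only stand-ins for whatever your own measurements suggest:

```python
# Hypothetical example: "does this list contain a duplicate?".  A quadratic
# all-pairs scan has almost no setup cost and wins on tiny inputs; a set-based
# O(N) check pays for hashing and allocation up front but wins on large ones.
CROSSOVER = 15   # illustrative threshold; pick yours from measurements

def has_duplicate_small(items):
    # O(N^2) pairwise comparison, no extra allocation
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_large(items):
    # O(N) with a hash set, higher fixed overhead
    return len(set(items)) != len(items)

def has_duplicate(items):
    if len(items) <= CROSSOVER:
        return has_duplicate_small(items)
    return has_duplicate_large(items)

print(has_duplicate([3, 1, 4, 1, 5]))        # small input, quadratic path
print(has_duplicate(list(range(100_000))))   # large input, set-based path
```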
I found a link to Introsort here, which
is a sorting algorithm that starts with Quicksort, switches to Heapsort when the recursion depth exceeds a limit based on the logarithm of the number of elements being sorted, and switches to insertion sort for small cases because of its good locality of reference, i.e. when the data most likely resides in memory and is easily referenced.
In that case insertion sort exploits memory locality, which means its constant B is very small; a recursive algorithm, by contrast, is likely to incur higher per-call costs and a significant constant C. Therefore, for smaller data sets, the more compact algorithms win, even if their Big O classification is worse.
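For illustration only, here is a much simplified sketch in the spirit of Introsort, not the implementation of any particular library; the cutoff of 16 and the 2*log2(N) depth limit are arbitrary choices:

```python
# Simplified Introsort-style hybrid: quicksort until the recursion gets too
# deep, heapsort as the guaranteed O(N log N) fallback, insertion sort once a
# partition is small enough for its tiny constants to win.
import heapq
import math
import random

SMALL = 16   # illustrative cutoff for the "small case"

def insertion_sort(a, lo, hi):
    # good locality and tiny constants: ideal for small partitions
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def heapsort(a, lo, hi):
    # fallback done simply via heapq on a copied slice
    part = a[lo:hi + 1]
    heapq.heapify(part)
    for i in range(lo, hi + 1):
        a[i] = heapq.heappop(part)

def partition(a, lo, hi):
    # Lomuto partition with the middle element as pivot
    mid = (lo + hi) // 2
    a[mid], a[hi] = a[hi], a[mid]
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def introsort(a, lo=0, hi=None, depth=None):
    if hi is None:
        hi = len(a) - 1
    if depth is None:
        depth = 2 * max(1, int(math.log2(max(2, hi - lo + 1))))
    while hi - lo > SMALL:
        if depth == 0:                        # recursion too deep: bail out
            heapsort(a, lo, hi)
            return
        depth -= 1
        p = partition(a, lo, hi)
        introsort(a, p + 1, hi, depth)        # recurse on one side...
        hi = p - 1                            # ...loop on the other
    insertion_sort(a, lo, hi)                 # small case: quadratic but cheap

data = [random.randint(0, 999) for _ in range(1000)]
introsort(data)
print(data == sorted(data))   # True
```

The point is the same as above: the asymptotically worse insertion sort handles the small partitions because its constants are tiny.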