First, some background: I have research code that performs Monte Carlo simulations. The important part is that it iterates over a collection of objects, calculates a series of vectors from their surfaces, and then, for each vector, iterates over the collection of objects again to check whether the vector hits another object (similar to ray tracing). The pseudocode looks something like this:
for each object {
    for a number of vectors {
        do some computations
        for each object {
            check if vector intersects
        }
    }
}
Since the number of objects can be quite large and the number of rays even larger, I thought it would be wise to optimize how I iterate over the collection of objects. I wrote some test code comparing arrays, lists, and vectors, and in my first test cases vector iterators were roughly twice as fast as arrays. However, when I switched my actual code to a vector, it was slightly slower than the array I had been using.
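For reference, here is a stripped-down sketch of roughly what my test harness looked like (the names `Object`, `intersects`, etc. are illustrative, not my real code): it times the same tight loop over a raw array and over a std::vector using iterators.

    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct Object { double x, y, z; };

    // Dummy stand-in for "check if the vector intersects"
    inline bool intersects(const Object& o) { return o.x + o.y + o.z > 0.0; }

    int main() {
        const std::size_t n = 1'000'000;
        std::vector<Object> vec(n, Object{1.0, -2.0, 3.0});
        Object* arr = vec.data();   // view the same storage as a raw array

        std::size_t hits = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i)      // raw array indexing
            hits += intersects(arr[i]);
        auto t1 = std::chrono::steady_clock::now();

        for (auto it = vec.begin(); it != vec.end(); ++it)  // vector iterators
            hits += intersects(*it);
        auto t2 = std::chrono::steady_clock::now();

        std::cout << "array loop:  "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count() << " ms\n"
                  << "vector loop: "
                  << std::chrono::duration<double, std::milli>(t2 - t1).count() << " ms\n"
                  << "(checksum " << hits << ")\n";
    }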
So I went back to the test code and increased the complexity of the function called in each loop iteration (a dummy function standing in for "check if the vector intersects"). I found that as this function's execution time grows, the gap between arrays and vectors shrinks until eventually the array becomes the faster of the two.
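To vary the cost of the loop body, I did something along these lines (again only a sketch; the `work` knob is illustrative, not a parameter from my real code). It builds on the `Object` struct defined above:

    // Dummy intersection test with an adjustable `work` parameter. Increasing
    // `work` mimics a more expensive real intersection check, so the fixed
    // per-iteration cost of the container (array vs. vector iterator) becomes
    // a smaller fraction of the total runtime.
    inline bool intersects(const Object& o, int work) {
        double acc = o.x;
        for (int k = 0; k < work; ++k)   // extra floating-point work per call
            acc = acc * 1.000001 + o.y - o.z;
        return acc > 0.0;
    }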
Does anyone know why this happens? It seems strange that the runtime of the loop body should affect the relative performance of the loop iteration itself.