I think it depends on the properties of your PRNG. Typical weaknesses of PRNGs are lower entropy in the lower bits and lower entropy in the first n values after seeding. Therefore, I think you should check your PRNG for such weaknesses and adapt your code accordingly.
Some of the Diehard tests may provide useful information, but you can also check the first n values yourself and compare their statistical properties, such as the sum, with the expected values.
For example, run your PRNG and sum the first 100 values modulo 11, and repeat this R times. If the total is very different from the expected value (5 * 100 * R), your PRNG suffers from one or both of the weaknesses mentioned above.
Not knowing which PRNG you are using, I would feel safer with something like this:
prng[0].initTimer();

// Throw the first 100 values away
for (int i = 0; i < 100; i++)
    prng[0].getRandNum();

// Use only the higher bits for seed values (assuming 32-bit size)
for (int i = 1; i < numRNGs; i++)
    prng[i].init(((prng[0].getRandNum() >> 16) << 16) + (prng[0].getRandNum() >> 16));
But, of course, this is speculation about your PRNG. With a perfect PRNG, your approach should work just as it is.