No, Python will not magically parallelize this for you. In fact, it cannot: it has no way to prove that the loop iterations are independent of each other, and establishing that would require program analysis/verification that is impossible in the general case.
If you need fast coarse-grained multi-core parallelism, I recommend joblib:
    from joblib import delayed, Parallel

    # run f on each input in NUM_CPUS worker processes
    values = Parallel(n_jobs=NUM_CPUS)(delayed(f)(x) for x in range(1000))
Not only have I seen almost linear speedups using this library, it also has the nice property of forwarding signals such as Ctrl-C to its worker processes, which cannot be said of every multiprocessing library.
Note that joblib does not actually provide shared-memory parallelism: it spawns worker processes, not threads, so it incurs some overhead in sending data to the workers and shipping the results back to the main process.
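For context, here is a minimal, self-contained sketch of that pattern; the worker function slow_square and the input range are made up for illustration, and the __main__ guard is there because the worker processes re-import the calling module:

    from joblib import Parallel, delayed

    def slow_square(x):
        # stand-in for a CPU-bound, per-record computation; its argument
        # and return value are pickled to/from the worker process
        return x * x

    if __name__ == "__main__":
        # n_jobs=-1 means "use all available CPU cores"
        results = Parallel(n_jobs=-1)(delayed(slow_square)(x) for x in range(1000))
        print(results[:5])  # [0, 1, 4, 9, 16]

Because every argument and result crosses a process boundary, this pays off only when each call does noticeably more work than it costs to serialize its inputs and outputs.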