Background
I am working on parallelizing some code for simulating cardiac electrophysiology. Since users can define their own simulations through a built-in scripting language, I have no way of knowing in advance how to manage the tradeoff between communication and computation. To deal with this, I am writing a kind of runtime profiler that will decide how to handle the domain decomposition once it sees what simulation is running and what hardware environment it has to work with.
My question is this:
How does MPI I/O work behind the scenes? Does each process actually write to a single file on some other node, or does each process write to some kind of sparse file that gets merged together when the file is closed?
Knowing this would help me decide whether I/O operations should be counted as communication or as computation, and adjust the balance accordingly...
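To make the question concrete, here is a minimal sketch (not from my actual code; the file name sim_output.dat and the fixed block size are just placeholders) of the kind of MPI I/O operation I mean: every rank opens the same file and writes its own disjoint region with a collective call.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each rank owns a disjoint block of doubles (placeholder data). */
        enum { COUNT = 1024 };
        double buf[COUNT];
        for (int i = 0; i < COUNT; i++)
            buf[i] = (double)rank;

        /* All ranks open the SAME file; the MPI library and the file
           system underneath decide how the bytes actually reach disk. */
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "sim_output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Collective write: rank r writes at byte offset
           r * COUNT * sizeof(double), so the regions never overlap.
           Collective calls allow the implementation to aggregate
           requests, which is exactly the communication-vs-computation
           distinction I am trying to profile. */
        MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

Whether calls like this are serviced locally, routed through aggregator ranks, or merged at file close is what determines how I should classify them in the profiler.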
Thanks in advance for any information you can offer.
Ross