MPI: common variable value for all processors

I have a question about MPI. I need two processes that both work with one variable, and I want both processes to see that variable with its most recent value.

```python
from mpi4py import MPI
from time import sleep

comm = MPI.COMM_WORLD
rank = comm.rank
assert comm.size == 2

msg = 0
sec = 10

if comm.rank == 0:
    for i in range(sec):
        print(msg)
        sleep(1)
        msg = comm.bcast(msg, root=1)
else:
    for i in range(sec * 2):
        msg += 1
        sleep(0.5)
        comm.bcast(msg, root=1)
```

So, I expect the program to print something like: 0 2 4 ...

But the program actually prints: 0 1 2 3 4 5 6 7 8 9

I am wondering whether mpi4py has a mechanism that lets the msg variable be shared by both processes, so that whenever msg is modified by process 1, the new value immediately becomes available to process 0. In other words, I want process 0 to read the latest msg value instead of receiving every intermediate change made by process 1.

+4
2 answers

I think you are confused about how distributed-memory programming works. In MPI, each process (or rank) has its own memory, so when it changes values through load/store operations (for example, what you do with msg += 1), this does not affect the value of the variable on the other process. The only way to update remote values is to send messages, which you do with the call to comm.bcast(). This sends the local msg value from rank 1 to all other ranks. Until that point, there is no way for rank 0 to know what is happening on rank 1.
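To make the distributed-memory point concrete without needing an MPI launcher, here is a hedged sketch using the standard library's multiprocessing instead of mpi4py: the child's local increments are invisible to the parent, and only an explicit message (conn.send, analogous to a bcast from the sender's side) updates the parent's copy. The function name worker is illustrative, not from any library.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    msg = 0
    for _ in range(5):
        msg += 1          # local update: invisible to the parent process
    conn.send(msg)        # the only way the parent learns the new value

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    msg = 0                    # the parent's own, independent copy
    p = Process(target=worker, args=(child_conn,))
    p.start()
    print(msg)                 # 0: the child's increments are not shared
    msg = parent_conn.recv()   # like the receiving side of comm.bcast
    print(msg)                 # 5: updated only after the message arrives
    p.join()
```

The same thing happens in the MPI code above: rank 0's msg only changes at the moment the bcast completes, never in between.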

If you want values shared between workers, you probably need to look at something else, perhaps threads. You would lose the distributed capabilities of MPI if you switch to OpenMP, but maybe distribution is not what you need MPI for. There are ways to do this within distributed-memory models (such as PGAS languages like Unified Parallel C, Global Arrays, etc.), but you will always run into a latency problem: there will be some window of time during which the values on ranks 0 and 1 are not synchronized, unless you add some protection to enforce it.
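If threads are an option, shared state comes for free, since all threads live in one address space. A minimal sketch (assuming a single Python process; the writer function and step count are illustrative):

```python
import threading
import time

counter = 0
lock = threading.Lock()

def writer(steps):
    global counter
    for _ in range(steps):
        with lock:
            counter += 1   # immediately visible to every other thread
        time.sleep(0.001)

t = threading.Thread(target=writer, args=(20,))
t.start()
t.join()

with lock:
    print(counter)  # 20: the main thread reads the writer's updates directly
```

The lock plays the role of the "protection" mentioned above: without it, a reader could observe a value mid-update.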

+3

As Wesley Bland mentioned, this is not really possible in a pure distributed-memory environment, since memory is not shared.

However, MPI has allowed something like this for some time: one-sided communication was introduced in MPI-2 (1997) and significantly updated in MPI-3 (2012). This approach can have real benefits, but you need to be careful: since the memory is not actually shared, each update requires communication, which is expensive, and it is easy to accidentally build significant scalability/performance bottlenecks into your code by over-relying on shared state.

The book Using MPI-2 contains an example implementation of a counter using MPI-2 one-sided communication; a simpler version of that counter is described and implemented in this answer in C. The mpi4py distribution ships implementations of the same counters in the 'nxtval' demo: the same simple counter as nxtval-onesided.py, and a more complex but more scalable implementation, also described in Using MPI-2, as nxtval-scalable.py. You should be able to use either of these implementations more or less as-is in the code above.
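To show the idea behind those nxtval counters without requiring an MPI launcher, here is a hedged sketch of a shared fetch-and-increment counter using the standard library's shared memory instead of MPI one-sided windows. The names fetch_and_add and worker are illustrative, not taken from the mpi4py demos; in the real one-sided version, the lock-protected read-modify-write would be an atomic operation on an MPI window.

```python
from multiprocessing import Process, Value

def fetch_and_add(counter):
    # Atomic read-modify-write, analogous to a fetch-and-op on an MPI window
    with counter.get_lock():
        old = counter.value
        counter.value += 1
        return old

def worker(counter, times):
    for _ in range(times):
        fetch_and_add(counter)

if __name__ == "__main__":
    counter = Value("i", 0)  # one shared integer, like a one-sided window
    procs = [Process(target=worker, args=(counter, 10)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 40: every increment was applied exactly once
```

The scalability caveat from above applies here too: every fetch_and_add serializes on the one shared counter, which is exactly the bottleneck the nxtval-scalable.py demo is designed to spread out.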

+2

Source: https://habr.com/ru/post/1490376/

