MPI implementation: Can one MPI_Recv receive messages from many MPI_Send calls?

Now I'm trying to use MPI_Send and MPI_Recv to exchange the best solutions found among several processes. The best solution found in each process should go to a manager process, which stores all the best solutions and, if necessary, sends them back to the other processes. My question is how to implement this. For example, as soon as process 1 finds a new best solution, it can call MPI_Send and send it to the manager process. Is there a way for the manager process to detect that there is a message waiting to be received? Does every MPI_Send require a matching MPI_Recv? I look forward to hearing from the experts. Thanks!

Thank you for your advice. What I'm going to do is have multiple worker processes send messages to the same manager process. The workers decide when to send; the manager process has to determine when to receive. Can MPI_Probe do this?

+4
3 answers

Yes, MPI_Recv can specify MPI_ANY_SOURCE as the rank of the message source, so it can do exactly what you want.
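
A minimal sketch of that pattern (not the original poster's code): rank 0 acts as the manager and each worker sends one double tagged with a made-up TAG_BEST. The payload layout and tag are assumptions for illustration only.

```c
#include <mpi.h>
#include <stdio.h>

#define TAG_BEST 0  /* hypothetical tag for "new best solution" messages */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                      /* manager */
        double best;
        MPI_Status status;
        int remaining = size - 1;         /* expect one message per worker in this sketch */
        while (remaining-- > 0) {
            /* MPI_ANY_SOURCE matches a message from whichever worker sends first */
            MPI_Recv(&best, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_BEST,
                     MPI_COMM_WORLD, &status);
            printf("best %f received from rank %d\n", best, status.MPI_SOURCE);
        }
    } else {                              /* worker */
        double best = 42.0 + rank;        /* stand-in for a real search result */
        MPI_Send(&best, 1, MPI_DOUBLE, 0, TAG_BEST, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

After each receive, status.MPI_SOURCE tells the manager which rank the message actually came from.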

+11

MPI_Recv can use MPI_ANY_SOURCE as a way to receive a message from any other rank.

Depending on the workload and the nature of the management process, you may want to keep control in your own code and only enter the MPI library from time to time. In that case, MPI_Irecv on MPI_ANY_SOURCE combined with MPI_Test can be a good way to proceed.
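
A sketch of that polling pattern, assuming MPI is already initialized and reusing the hypothetical single-double payload and TAG_BEST tag from the sketch above:

```c
double incoming;
MPI_Request req;
MPI_Status  status;
int flag = 0;

/* post a non-blocking receive that matches a message from any worker rank */
MPI_Irecv(&incoming, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_BEST,
          MPI_COMM_WORLD, &req);

while (!flag) {
    /* ... do a slice of the manager's own work here ... */

    /* check whether a worker message has arrived; returns immediately */
    MPI_Test(&req, &flag, &status);
}

/* `incoming` now holds the worker's value and status.MPI_SOURCE identifies
 * the sender; re-post MPI_Irecv here if more messages are expected */
```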

If you need to do some processing based on the incoming message before it is actually received, MPI_Probe or MPI_Iprobe let you inspect the message envelope before it is truly MPI_Recv'd. For example, MPI_Probe lets you determine the size of a pending message so you can allocate a buffer of the appropriate size.
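
A sketch of the probe-then-receive pattern, again assuming the hypothetical TAG_BEST tag and a payload of doubles (and that <stdlib.h> is included for malloc):

```c
MPI_Status status;
int count;

/* block until a matching message is pending, without receiving it */
MPI_Probe(MPI_ANY_SOURCE, TAG_BEST, MPI_COMM_WORLD, &status);

/* how many doubles does the pending message contain? */
MPI_Get_count(&status, MPI_DOUBLE, &count);

double *buf = malloc(count * sizeof *buf);

/* receive exactly the message that was probed, from the same source and tag */
MPI_Recv(buf, count, MPI_DOUBLE, status.MPI_SOURCE, status.MPI_TAG,
         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* ... use buf, then ... */
free(buf);
```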

In addition, if all worker ranks will occasionally reach a "barrier" point where the best solutions need to be compared, the collective operations MPI_Gather / MPI_Bcast can also do the job.
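
A sketch of that collective variant at an agreed synchronization point, assuming each rank holds its current best in a double called local_best and that "best" means smallest; the gather buffer also needs <stdlib.h>:

```c
double global_best = local_best;
double *all = NULL;
int rank, size;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

if (rank == 0)
    all = malloc(size * sizeof *all);

/* every rank contributes one double; rank 0 collects them all */
MPI_Gather(&local_best, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

if (rank == 0) {
    for (int i = 0; i < size; i++)
        if (all[i] < global_best)
            global_best = all[i];
    free(all);
}

/* broadcast the winner back to everyone */
MPI_Bcast(&global_best, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
```

If the best solution really is a single scalar, MPI_Allreduce with MPI_MIN (or MPI_MAX) collapses the gather, the comparison loop, and the broadcast into one call.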

Keep in mind that ranks that enter long computational phases can sometimes get in the way of good message progression. If there is an extended computational phase, it may be worth ensuring that all pending MPI messages have been delivered before that phase starts. This becomes more important if the cluster uses an RDMA-style interconnect. MPI_Barrier guarantees that all ranks have entered MPI_Barrier before any rank can return from the MPI_Barrier call.

+5

Take a look at MPI_Probe.

+3

Source: https://habr.com/ru/post/1302396/
