MPI_Recv with MPI_ANY_SOURCE lets a rank receive a message from any other rank.
Depending on the workload and the nature of the management process, you may want to keep control in your own code and only enter the MPI library from time to time. In that case, posting an MPI_Irecv on MPI_ANY_SOURCE and polling it with MPI_Test is a good way to proceed.
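A minimal sketch of that pattern (run with at least two ranks; the tag, the payload, and the termination logic are made up for the example):

```c
#include <mpi.h>
#include <stdio.h>

#define WORK_TAG 1

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                       /* manager */
        double result;
        MPI_Request req;
        MPI_Status  status;
        int done = 0;

        /* Post the receive once; it matches a message from any worker. */
        MPI_Irecv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, WORK_TAG,
                  MPI_COMM_WORLD, &req);

        while (!done) {
            /* ... do some management/bookkeeping work here ... */

            /* Periodically enter the MPI library so the message can progress. */
            MPI_Test(&req, &done, &status);
        }
        printf("got %f from rank %d\n", result, status.MPI_SOURCE);
    } else {                               /* worker */
        double result = 42.0;              /* placeholder computation */
        MPI_Send(&result, 1, MPI_DOUBLE, 0, WORK_TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```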
If you need to do some processing based on the contents of a message, MPI_Probe or MPI_Iprobe let you inspect the message envelope before the message is actually received with MPI_Recv. For example, MPI_Probe lets you determine the size of an incoming message and allocate a buffer of exactly the right size.
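A rough sketch of the probe-then-allocate-then-receive sequence; the element type (MPI_INT) and the helper name are just assumptions for illustration:

```c
#include <mpi.h>
#include <stdlib.h>

/* Receives the next pending message from any source into a freshly
 * allocated buffer sized to fit it; the caller frees the buffer. */
int *recv_any(MPI_Comm comm, int *count_out, MPI_Status *status_out)
{
    /* Inspect the envelope of the next message without receiving it. */
    MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, status_out);

    /* How many MPI_INT elements does it carry? */
    MPI_Get_count(status_out, MPI_INT, count_out);

    /* Now the buffer can be sized exactly before the real receive. */
    int *buf = malloc(*count_out * sizeof(int));
    MPI_Recv(buf, *count_out, MPI_INT, status_out->MPI_SOURCE,
             status_out->MPI_TAG, comm, MPI_STATUS_IGNORE);
    return buf;
}
```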
In addition, if all of the workers periodically reach a "barrier" point where the current best solutions need to be compared, the collective operations MPI_Gather / MPI_Bcast can be used as well (see the sketch below).
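One possible shape of such a step, assuming each rank's candidate solution can be reduced to a single double and that lower values are better:

```c
#include <mpi.h>
#include <stdlib.h>

/* Every rank contributes its current best value; rank 0 picks the overall
 * best and broadcasts it back so all ranks continue from the same point. */
double exchange_best(double my_best, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    double *all = NULL;
    if (rank == 0)
        all = malloc(size * sizeof(double));

    /* Collect every rank's candidate on rank 0. */
    MPI_Gather(&my_best, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, comm);

    double best = my_best;
    if (rank == 0) {
        for (int i = 0; i < size; ++i)
            if (all[i] < best)            /* assume "lower is better" */
                best = all[i];
        free(all);
    }

    /* Everyone continues with the same, globally best value. */
    MPI_Bcast(&best, 1, MPI_DOUBLE, 0, comm);
    return best;
}
```

If only the single best value is needed, MPI_Allreduce with MPI_MIN does the same thing in one call; the MPI_Gather / MPI_Bcast pair is shown here because it also lets rank 0 inspect every candidate.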
Keep in mind that ranks that go into long computational phases can interfere with timely message delivery. If there is an extended computational phase, it can be worthwhile to make sure all outstanding MPI messages have been delivered before that phase starts. This becomes more important if the cluster uses an RDMA-type interconnect. MPI_Barrier guarantees that no rank returns from the call until every rank has entered it.
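A small sketch of quiescing before such a phase; `reqs`/`nreqs` stand for whatever non-blocking requests this rank still has outstanding, which is an assumption of the example:

```c
#include <mpi.h>

/* Finish this rank's pending non-blocking operations, then synchronize,
 * so no message is left in flight while every rank is busy computing. */
void quiesce_before_compute(MPI_Request *reqs, int nreqs, MPI_Comm comm)
{
    MPI_Waitall(nreqs, reqs, MPI_STATUSES_IGNORE);  /* drain our requests */
    MPI_Barrier(comm);   /* no rank returns until all ranks have entered */
}
```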