I am trying to use an mpi_f08 halo-exchange module on a set of rank-4, -5 and -6 arrays. I used to use subarray types for this, but in the end there were so many of them that ifort was unable to keep track of them all and started mangling them when compiling with -ipo.
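For reference, the old subarray approach looked roughly like the sketch below (the sizes, halo layout and names here are placeholders, not my real code), with one committed type per array rank, face and direction:

program subarray_halo_sketch
  use mpi_f08
  implicit none
  ! Placeholder sizes; the real values live in a parameters module
  integer, parameter :: kthird = 4, ksizex_l = 4, ksizey_l = 4, ksizet_l = 4, size5 = 2, size6 = 3
  complex(kind(1.d0)) :: Array(kthird, 0:ksizex_l+1, ksizey_l, ksizet_l, size5, size6)
  type(MPI_Datatype) :: halo_x_send
  integer :: sizes(6), subsizes(6), starts(6)

  call MPI_Init()

  sizes    = [kthird, ksizex_l+2, ksizey_l, ksizet_l, size5, size6]  ! whole local array incl. x halo
  subsizes = [kthird, 1,          ksizey_l, ksizet_l, size5, size6]  ! the one-slice face being sent
  starts   = [0, ksizex_l, 0, 0, 0, 0]  ! zero-based offsets: last interior x slice

  call MPI_Type_create_subarray(6, sizes, subsizes, starts, MPI_ORDER_FORTRAN, &
                                MPI_DOUBLE_COMPLEX, halo_x_send)
  call MPI_Type_commit(halo_x_send)

  ! The exchange then sends the whole array with count 1 and the derived type, e.g.
  ! call MPI_Isend(Array, 1, halo_x_send, ip_xup, tag, comm, req)
  ! One such type is needed per rank, face and direction, which is what got out of hand.

  call MPI_Type_free(halo_x_send)
  call MPI_Finalize()
end program subarray_halo_sketch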
Instead, I now use code along the lines of:
call MPI_Isend(Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xup, 0 + tag_offset, comm, reqs(1))
call MPI_Irecv(Array(1:kthird, 0, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xdn, 0 + tag_offset, comm, reqs(2))
(followed later by a call to MPI_Waitall).
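Put together, a minimal self-contained version of the pattern looks like this (the sizes, neighbour ranks and tag handling are placeholder stand-ins, not my real module):

program section_halo_sketch
  use mpi_f08
  implicit none
  ! Placeholder sizes and neighbour logic; the real values come from my decomposition module
  integer, parameter :: kthird = 4, ksizex_l = 4, ksizey_l = 4, ksizet_l = 4, size5 = 2, size6 = 3
  ! ASYNCHRONOUS because the buffer is still in flight after MPI_Isend/MPI_Irecv return
  complex(kind(1.d0)), asynchronous :: Array(kthird, 0:ksizex_l+1, ksizey_l, ksizet_l, size5, size6)
  type(MPI_Request) :: reqs(2)
  type(MPI_Comm)    :: comm
  integer :: size, ip_xup, ip_xdn, tag_offset, rank, nproc

  call MPI_Init()
  comm = MPI_COMM_WORLD
  call MPI_Comm_rank(comm, rank)
  call MPI_Comm_size(comm, nproc)
  ip_xup = mod(rank + 1, nproc)             ! periodic neighbours in x
  ip_xdn = mod(rank - 1 + nproc, nproc)
  tag_offset = 0
  size = kthird * ksizey_l * ksizet_l * size5 * size6   ! elements in one x face
  Array = (0.d0, 0.d0)

  ! Send the last interior x slice up, receive into the lower halo slice
  call MPI_Isend(Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), &
                 size, MPI_DOUBLE_COMPLEX, ip_xup, 0 + tag_offset, comm, reqs(1))
  call MPI_Irecv(Array(1:kthird, 0, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), &
                 size, MPI_DOUBLE_COMPLEX, ip_xdn, 0 + tag_offset, comm, reqs(2))
  call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE)

  call MPI_Finalize()
end program section_halo_sketch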
ifort 2017 with Intel MPI 2017 gives the following warning for each such line:
test_mpif08.F90(51): warning
Despite this, the halo exchange works fine for the rank-4 and rank-5 arrays. For the rank-6 arrays, however, data ends up in completely the wrong places: values from the halo of the sending process (which are not part of the array section passed to MPI_Isend) turn up in elements of the receiving process's array that were not part of the section passed to MPI_Irecv.
Using ifort 2018 and the Intel MPI 2019 preview gives an additional error (not just a warning):
test_halo_6_aio.F90(60): warning
call MPI_Isend(Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xup, 0 + tag_offset, comm, reqs(1))
-------------------^
test_halo_6_aio.F90(60): error
call MPI_Isend(Array(1:kthird, ksizex_l, 1:ksizey_l, 1:ksizet_l, 1:size5, 1:size6), size, MPI_Double_Complex, ip_xup, 0 + tag_offset, comm, reqs(1))
^
Three interrelated questions:
- Is there anything wrong with my syntax in the calls to MPI_Isend and MPI_Irecv that is causing these warnings? How can I fix it so that the warnings no longer appear?
- Is whatever the warning refers to also the cause of the data corruption I see with the rank-6 array?
- Why does the corruption only show up for the rank-6 case?