Derived Data Types with MPI

I am learning how to broadcast Fortran derived data types with MPI and have code that reads two values from the terminal and displays them in every process. For a value1 / value2 combination of type integer / integer and integer / real this works, but it fails for the combination integer / real*8.

Code:

 use mpi
 implicit none

 integer :: ierror, pid, ncpu, root = 0
 integer :: counts, newtype, extent
 integer, dimension(2) :: oldtypes, blockcounts, offsets

 type value
     integer :: value1 = 0
     real*8  :: value2
 end type
 type (value) input

 call MPI_INIT(ierror)
 call MPI_COMM_RANK(MPI_COMM_WORLD, pid, ierror)
 call MPI_COMM_SIZE(MPI_COMM_WORLD, ncpu, ierror)

 ! setup of 1 MPI_INTEGER field: value1
 offsets(1) = 0
 oldtypes(1) = MPI_INTEGER
 blockcounts(1) = 1

 ! setup of 1 MPI_REAL8 field: value2
 call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierror)   ! determine extent of MPI_INTEGER
 offsets(2) = blockcounts(1)*extent                  ! offset is 1 MPI_INTEGER extent
 oldtypes(2) = MPI_REAL8
 blockcounts(2) = 1

 ! define struct type and commit
 counts = 2   ! for MPI_INTEGER + MPI_REAL8
 call MPI_TYPE_STRUCT(counts, blockcounts, offsets, &
                      oldtypes, newtype, ierror)
 call MPI_TYPE_COMMIT(newtype, ierror)

 do while (input%value1 >= 0)
     if (pid == root) then
         read(*,*) input
         write(*,*) 'input was: ', input
     end if
     call MPI_BCAST(input, 1, newtype, &
                    root, MPI_COMM_WORLD, ierror)
     write(*,*) 'process ', pid, ' received: ', input
 end do

 call MPI_TYPE_FREE(newtype, ierror)
 call MPI_FINALIZE(ierror)

You can verify that the integer / integer and integer / real combinations work fine by modifying the corresponding declaration and oldtype. The integer / real*8 combination fails; for example, with the input -1 2.0:

 input was:  -1   2.0000000000000000
 process  0 received:  -1   2.0000000000000000
 process  1 received:  -1   0.0000000000000000
 process  2 received:  -1   0.0000000000000000
 process  3 received:  -1   0.0000000000000000

This thread about a similar problem suggests that the use of MPI_TYPE_EXTENT is incorrect, because additional padding may be inserted that is not taken into account. Unfortunately I have not been able to solve the problem from that, and I hope someone here can enlighten me.

Thanks in advance.

+6
1 answer

You have the basic idea right: you have created the structure, but you are assuming that the double precision value is stored immediately after the integer value, and in general that is not correct. Hristo's answer that you refer to is a good answer for the C case.

The problem is that the compiler will normally align the fields of your data structure for you. Most systems can read/write values that are aligned in memory much faster than they can perform unaligned accesses, if they can perform them at all. Typically the requirement is that items are aligned according to their size: an 8-byte double precision number must be aligned on 8-byte boundaries (that is, the address of its first byte is zero modulo 8), while an integer only needs to be 4-byte aligned. This almost certainly means that there are 4 bytes of padding between the integer and the double.
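As a quick illustration, here is a minimal standalone sketch (assuming a compiler that supports the Fortran 2008 storage_size intrinsic and aligns real*8 on 8-byte boundaries) that makes the padding visible without any MPI at all:

    program show_padding
        implicit none
        type value
            integer :: value1 = 0
            real*8  :: value2 = 0.d0
        end type
        type(value) :: v

        ! 4 bytes (integer) + 8 bytes (real*8) = 12 bytes of payload, but on
        ! most compilers this prints 16: 4 bytes of padding follow the
        ! integer so that value2 starts on an 8-byte boundary.
        print *, 'storage size of type value: ', storage_size(v)/8, ' bytes'
    end program show_padding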

In many cases you can persuade the compiler to relax this behaviour; in Fortran you can also use the sequence keyword to request that the data be stored contiguously. Either way, from a performance point of view (which, one assumes, is why you are using Fortran and MPI) this is almost never a good idea, but it can be useful for byte-for-byte compatibility with other externally imposed data types or formats.
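As a sketch, the sequence variant of the type would look like the following; note that sequence fixes only the storage order of the components, and whether the alignment padding actually disappears still depends on the compiler and its packing options:

    program sequence_variant
        implicit none
        type value
            sequence          ! fixes the storage order of the components
            integer :: value1
            real*8  :: value2
        end type
        type(value) :: v

        ! With some compilers/options this prints 12 (packed); with others it
        ! still prints 16, since sequence alone does not forbid padding.
        print *, 'storage size with sequence: ', storage_size(v)/8, ' bytes'
    end program sequence_variant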

Given that the padding is likely introduced for performance reasons, you could guess the alignment and hardcode it in your program, but that is probably not a good idea either: if you add other fields, or change the real to a 4-byte single precision number, etc., your code would be wrong again. It is best to use MPI_Get_address to find the locations explicitly and calculate the correct offsets yourself:

 integer(kind=MPI_Address_kind) :: startloc, endloc
 integer :: counts, newtype
 integer, dimension(2) :: oldtypes, blockcounts, offsets

 type value
     integer :: value1 = 0
     double precision :: value2
 end type
 type (value) :: input

 !...

 ! setup of 1 MPI_INTEGER field: value1
 call MPI_Get_address(input, startloc, ierror)

 oldtypes(1) = MPI_INTEGER
 blockcounts(1) = 1
 call MPI_Get_address(input%value1, endloc, ierror)
 offsets(1) = endloc - startloc

 oldtypes(2) = MPI_DOUBLE_PRECISION
 blockcounts(2) = 1
 call MPI_Get_address(input%value2, endloc, ierror)
 offsets(2) = endloc - startloc

 if (pid == 0) then
     print *,'offsets are: ', offsets
 endif

Note that if you had an array of such derived types, then to cover the case of padding between the last field of one item and the start of the next item, you would also need to measure that explicitly and set the overall extent of the type (the displacement between the start of one item of this type and the start of the next) with MPI_Type_create_resized.
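For completeness, here is a sketch of how the measured offsets can be fed into the non-deprecated MPI_TYPE_CREATE_STRUCT and how MPI_TYPE_CREATE_RESIZED then lets you broadcast an array of such items; the program name, variable names and test values here are illustrative, not taken from the question:

    program resized_bcast
        use mpi
        implicit none

        type value
            integer :: value1 = 0
            double precision :: value2 = 0.d0
        end type

        type(value), dimension(2) :: arr
        integer :: ierror, pid, structtype, resizedtype
        integer, dimension(2) :: blockcounts, oldtypes
        integer(kind=MPI_ADDRESS_KIND), dimension(2) :: offsets
        integer(kind=MPI_ADDRESS_KIND) :: base, addr, lb, extent

        call MPI_INIT(ierror)
        call MPI_COMM_RANK(MPI_COMM_WORLD, pid, ierror)

        ! offsets of the two fields within one element: measured, not assumed
        call MPI_GET_ADDRESS(arr(1), base, ierror)
        call MPI_GET_ADDRESS(arr(1)%value1, addr, ierror)
        offsets(1) = addr - base
        call MPI_GET_ADDRESS(arr(1)%value2, addr, ierror)
        offsets(2) = addr - base

        oldtypes    = (/ MPI_INTEGER, MPI_DOUBLE_PRECISION /)
        blockcounts = (/ 1, 1 /)

        ! modern replacement for MPI_TYPE_STRUCT; takes address-kind offsets
        call MPI_TYPE_CREATE_STRUCT(2, blockcounts, offsets, oldtypes, &
                                    structtype, ierror)

        ! the true spacing between consecutive array elements (including any
        ! trailing padding) becomes the extent of the resized type
        call MPI_GET_ADDRESS(arr(2), addr, ierror)
        lb     = 0
        extent = addr - base
        call MPI_TYPE_CREATE_RESIZED(structtype, lb, extent, resizedtype, ierror)
        call MPI_TYPE_COMMIT(resizedtype, ierror)

        if (pid == 0) then
            arr(1)%value1 = 1;  arr(1)%value2 = 1.5d0
            arr(2)%value1 = 2;  arr(2)%value2 = 2.5d0
        end if

        ! a count of 2 now steps correctly from arr(1) to arr(2)
        call MPI_BCAST(arr, 2, resizedtype, 0, MPI_COMM_WORLD, ierror)
        write(*,*) 'process ', pid, ' has ', arr

        call MPI_TYPE_FREE(resizedtype, ierror)
        call MPI_TYPE_FREE(structtype, ierror)
        call MPI_FINALIZE(ierror)
    end program resized_bcast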

+7

Source: https://habr.com/ru/post/970977/

