How to use an MPI derived data type for a 3D array?

I want to write parallel code that operates on a three-dimensional matrix, where each process owns its own submatrix; but to do their job the processes need some information about the submatrices of neighboring processes (only the boundary planes). I was sending this information with point-to-point communication, but I know that for a large matrix this is not a good idea, so I decided to use derived data types for the communication. I have a problem with MPI_Type_vector: for example, I have an NX*NY*NZ matrix, and I want to send a plane with constant y to another process, so I wrote these lines:

```cpp
MPI_Datatype sub;
MPI_Type_vector(NX, NZ, NY*NZ, MPI_DOUBLE, &sub);
MPI_Type_commit(&sub);
```

but it does not work (I cannot send the desired plane). What's wrong? My test code is here:

```cpp
#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
    int const IE = 100, JE = 25, KE = 100;
    int size, rank;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Datatype sub;
    MPI_Type_vector(KE, IE, IE+(JE-1)*IE, MPI_DOUBLE, &sub);
    MPI_Type_commit(&sub);

    if (rank == 0) {
        double ***a = new double **[IE];
        for (int i = 0; i < IE; i++) {
            a[i] = new double *[JE];
            for (int j = 0; j < JE; j++) {
                a[i][j] = new double[KE];
            }
        }
        for (int i = 0; i < IE; i++)
            for (int j = 0; j < JE; j++)
                for (int k = 0; k < KE; k++)
                    a[i][j][k] = 2;
        for (int i = 0; i < IE; i++)
            for (int j = 0; j < JE; j++)
                a[i][j][0] = 2;
        MPI_Send(&a[0][0][0], 1, sub, 1, 52, MPI_COMM_WORLD);
    }
    if (rank == 1) {
        double ***b = new double **[IE];
        for (int i = 0; i < IE; i++) {
            b[i] = new double *[JE];
            for (int j = 0; j < JE; j++) {
                b[i][j] = new double[KE];
            }
        }
        for (int i = 0; i < IE; i++)
            for (int j = 0; j < JE; j++)
                for (int k = 0; k < KE; k++)
                    b[i][j][k] = 0;
        MPI_Recv(&b[0][0][0], 1, sub, 0, 52, MPI_COMM_WORLD, &status);
        for (int i = 0; i < IE; i++)
            for (int j = 0; j < JE; j++)
                for (int k = 0; k < KE; k++)
                    if (b[i][j][k] > 0)
                        cout << "b[" << i << "][" << j << "][" << k << "]=" << b[i][j][k] << endl;
    }
    MPI_Finalize();
}
```
3 answers

With a 3d matrix, in the general case you would have to use a vector of vectors (because two strides are involved), which is possible, but it is much easier to use MPI_Type_create_subarray(), which lets you carve out the slab of the multidimensional array that you want.

Update: one of the problems in the code above is that the 3D array you allocate is not contiguous; it is a collection of IE*JE separately allocated 1-d arrays that may or may not be next to each other in memory. So there is no reliable way to extract a plane of data from it.

You need to do something like this:

```cpp
double ***alloc3d(int l, int m, int n) {
    double *data = new double[l*m*n];
    double ***array = new double **[l];
    for (int i = 0; i < l; i++) {
        array[i] = new double *[m];
        for (int j = 0; j < m; j++) {
            array[i][j] = &(data[(i*m + j)*n]);
        }
    }
    return array;
}
```

Then the data sits in one big contiguous cube, as you would expect, with the array of pointers pointing into it. The fact that C does not have real multidimensional arrays comes up constantly with C + MPI.


I'm sorry to say that your posted code is still not working correctly. The reason the result seems correct is that IE and KE are equal. If you make them different, you will see that the values are written at the wrong Y indices.

If you look at the memory layout of Jonathan Dursi's code, which looks like this:

```
[x0y0z0][x0y0z1][x0y1z0][x0y1z1][x1y0z0][x1y0z1][x1y1z0][x1y1z1]
// or: {x0: (y0:[z0,z1]) ; (y1:[z0,z1])} ; {x1: (y0:[z0,z1]) ; (y1:[z0,z1])}   (nx = ny = nz = 2)

<----bl.len---->                <----bl.len---->   ... (x count)
<------------stride------------>
```

you will see that a constant-y plane consists of count = nx blocks of block length nz, with a stride of ny*nz between them.

Your code works correctly if you change your data type to:

```cpp
MPI_Type_vector(IE, KE, KE*JE, MPI_DOUBLE, &sub);
```

Thanks, Jonathan Dursi. Here I want to post the full code that creates a 3D matrix and uses a derived data type for communication (only a plane with constant y is sent from one process to another). I used Jonathan Dursi's function posted above.

```cpp
#include <mpi.h>
#include <iostream>
#include <math.h>
#include <fstream>
#include <vector>
using namespace std;

#define IE 100
#define JE 50
#define KE 100
#define JE_loc 52

double ***alloc3d(int l, int m, int n) {
    double *data = new double[l*m*n];
    double ***array = new double **[l];
    for (int i = 0; i < l; i++) {
        array[i] = new double *[m];
        for (int j = 0; j < m; j++) {
            array[i][j] = &(data[(i*m + j)*n]);
        }
    }
    return array;
}

int main(int argc, char **argv)
{
    ////////////////////// declarations /////////////////////////////
    int const NFREQS = 100, ia = 7, ja = 7, ka = 7;
    double const pi = 3.14159;
    int i, j, size, rank, k;
    MPI_Status status;
    MPI_Request request[10];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // count = IE blocks of length KE, stride KE*JE (the correction from the
    // answer above; the order matters as soon as IE != KE)
    MPI_Datatype sub;
    MPI_Type_vector(IE, KE, KE*JE, MPI_DOUBLE, &sub);
    MPI_Type_commit(&sub);

    double ***a = alloc3d(IE, JE, KE);
    for (i = 0; i < IE; i++)
        for (j = 0; j < JE; j++)
            for (k = 0; k < KE; k++)
                a[i][j][k] = 0.0;

    if (rank == 0) {
        for (i = 0; i < IE; i++)
            for (j = 0; j < JE; j++)
                for (k = 0; k < KE; k++)
                    a[i][j][k] = 2;
        MPI_Send(&a[0][0][0], 1, sub, 1, 52, MPI_COMM_WORLD);
    }
    if (rank == 1) {
        // receive the plane into y == 49 of this rank's array
        MPI_Recv(&a[0][49][0], 1, sub, 0, 52, MPI_COMM_WORLD, &status);
        for (i = 0; i < IE; i++)
            for (j = 0; j < JE; j++)
                for (k = 0; k < KE; k++)
                    if (a[i][j][k] > 0)
                        cout << "a[" << i << "][" << j << "][" << k << "]=" << a[i][j][k] << endl;
    }
    MPI_Finalize();
}
```

Source: https://habr.com/ru/post/1401794/

