You have a data race in your loop counters:
#pragma omp for
for (i=0; i<nx; i++) {
    for (j=0; j<ny; j++) {      // <--- data race
        for (k=0; k<nz; k++) {  // <--- data race
            arr_par[i][j][k] = i*j + k;
        }
    }
}
Since neither j nor k is given the private data-sharing class, their values can exceed the corresponding loop bounds when several threads try to increment them at the same time, which leads to out-of-bounds accesses to arr_par. The likelihood of j or k being incremented by more than one thread at once grows with the number of iterations.
The best way to handle these cases is to simply declare the loop variables inside the loop statement itself:
#pragma omp for
for (int i=0; i<nx; i++) {
    for (int j=0; j<ny; j++) {
        for (int k=0; k<nz; k++) {
            arr_par[i][j][k] = i*j + k;
        }
    }
}
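Note that declaring loop variables inside the for statement requires C99 or later (or C++); with GCC, compile with something like gcc -fopenmp -std=c99. As a self-contained sketch of this fix (the array sizes here are made up for illustration, not taken from your code):

#include <stdio.h>

#define NX 4   /* illustrative sizes, not from the question */
#define NY 4
#define NZ 4

static int arr_par[NX][NY][NZ];

int main(void) {
    #pragma omp parallel default(shared)
    {
        #pragma omp for
        for (int i = 0; i < NX; i++) {          /* i: implicitly private */
            for (int j = 0; j < NY; j++) {      /* j: private, declared inside the loop */
                for (int k = 0; k < NZ; k++) {  /* k: private, declared inside the loop */
                    arr_par[i][j][k] = i*j + k;
                }
            }
        }
    }
    printf("arr_par[1][2][3] = %d\n", arr_par[1][2][3]);
    return 0;
}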
Another way is to add the private(j,k) clause to the head of the parallel region:
#pragma omp parallel default(shared) private(threadid) private(j,k)
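Put in context, a minimal sketch of that variant might look like the following (threadid is kept from your pragma; what it is used for in your real code is assumed, and the sizes are again illustrative):

#include <omp.h>

#define nx 4   /* illustrative sizes, not from the question */
#define ny 4
#define nz 4

static int arr_par[nx][ny][nz];

void fill(void) {
    int i, j, k, threadid;

    #pragma omp parallel default(shared) private(threadid) private(j,k)
    {
        threadid = omp_get_thread_num();   /* each thread writes its own private copy */

        #pragma omp for
        for (i = 0; i < nx; i++) {         /* i: implicitly private as the worksharing loop variable */
            for (j = 0; j < ny; j++) {     /* j and k are now private: no data race */
                for (k = 0; k < nz; k++) {
                    arr_par[i][j][k] = i*j + k;
                }
            }
        }
    }
}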
It is not necessary to make i private in your case, since the loop variable of a parallel loop is implicitly made private. However, if i is used elsewhere in the code, it might make sense to make it private as well to prevent other data races.
Also, do not use clock() to measure the run time of parallel applications, since on most Unix systems it returns the total CPU time summed over all threads. Use omp_get_wtime() instead.
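A minimal timing sketch: omp_get_wtime() returns wall-clock time in seconds as a double, so the elapsed time is just the difference of two calls:

#include <stdio.h>
#include <omp.h>

int main(void) {
    double t0 = omp_get_wtime();   /* wall-clock time, in seconds */

    #pragma omp parallel
    {
        /* ... parallel work to be timed ... */
    }

    double t1 = omp_get_wtime();
    printf("elapsed: %.6f s\n", t1 - t0);
    return 0;
}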