CUDA code #define error: expected a ")"

In the following code, if I put #define N 65536 above the #if FSIZE directive, I get the error shown after the code:

#define N 65536

#if FSIZE==1
__global__ void compute_sum1(float *a, float *b, float *c, int N)
{
        int majorIdx = blockIdx.x;
        int subIdx = threadIdx.x;

        int idx=majorIdx*32+subIdx ;

        float sum=0;

        int t=4*idx;
        if(t<N)
        {
                c[t]= a[t]+b[t];
                c[t+1]= a[t+1]+b[t+1];
                c[t+2]= a[t+2]+b[t+2];
                c[t+3]= a[t+3]+b[t+3];
        }
        return;
}
#elif FSIZE==2
__global__ void compute_sum2(float2 *a, float2 *b, float2 *c, int N)
{
        int majorIdx = blockIdx.x;
        int subIdx = threadIdx.x;

        int idx=majorIdx*32+subIdx ;

        float sum=0;

        int t=2*idx;
        if(t<N)
        {
                c[t].x= a[t].x+b[t].x;
                c[t].y= a[t].y+b[t].y;
                c[t+1].x= a[t+1].x+b[t+1].x;
                c[t+1].y= a[t+1].y+b[t+1].y;
        }
        return;
}
#endif

float1vsfloat2.cu (10): error: expected a ")"

This problem is quite annoying, and I would really like to know why it happens. I have the feeling I'm overlooking something really obvious. By the way, this section of code is at the very top of the file; there isn't even an #include in front of it. I would appreciate any explanation.

1 answer

Because N is #defined before the kernel declarations, the preprocessor replaces every later occurrence of the token N, including the kernel's parameter name. It turns this line:

__global__ void compute_sum1(float *a, float *b, float *c, int N)

to

__global__ void compute_sum1(float *a, float *b, float *c, int 65536)

which is not valid CUDA code.
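One way out (a minimal sketch; the lowercase parameter name n is just an illustrative choice) is to rename either the macro or the kernel parameter so the two tokens no longer collide, for example:

#define N 65536

#if FSIZE==1
// The parameter is now named n, so the macro N no longer rewrites the signature.
// Callers can still pass N as the argument; it expands to 65536 at the call site.
__global__ void compute_sum1(float *a, float *b, float *c, int n)
{
        int idx = blockIdx.x * 32 + threadIdx.x;
        int t = 4 * idx;
        if (t < n)
        {
                c[t]   = a[t]   + b[t];
                c[t+1] = a[t+1] + b[t+1];
                c[t+2] = a[t+2] + b[t+2];
                c[t+3] = a[t+3] + b[t+3];
        }
}
#endif

You can also see exactly what the preprocessor emitted by preprocessing only (for example, nvcc -E -DFSIZE=1 float1vsfloat2.cu) and checking what the kernel signature expanded to.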


Source: https://habr.com/ru/post/1791531/

