I am unable to pass a vector-type parameter (uint8) to an OpenCL kernel function from C host code.
On the host, I have the data in an array:
cl_uint dataArr[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
(My real data is more than just the values 1 through 8; this is simplified for ease of explanation.)
Then I copy the data into a buffer that will be passed to the kernel:
cl_mem kernelInputData = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(cl_uint)*8, dataArr, NULL);
Then I pass this buffer to the kernel:
clSetKernelArg(kernel, 0, sizeof(cl_mem), &kernelInputData);
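For context, the surrounding host code is essentially this (context, queue, and program are created earlier with the usual boilerplate, so treat them as placeholders here, and the error checks are abbreviated):

#include <OpenCL/opencl.h> /* <CL/cl.h> on non-Apple platforms */

cl_int err;
cl_uint dataArr[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

/* copy the host array into a read-only device buffer */
cl_mem kernelInputData = clCreateBuffer(context,
    CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
    sizeof(cl_uint) * 8, dataArr, &err);

cl_kernel kernel = clCreateKernel(program, "kernelFunction", &err);
err = clSetKernelArg(kernel, 0, sizeof(cl_mem), &kernelInputData);

/* launch a single work-item just to read the vector */
size_t globalSize = 1;
err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, NULL, 0, NULL, NULL);
clFinish(queue);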
And the kernel function signature looks something like this:
kernel void kernelFunction(constant uint8 *vectorPtr)
However, the kernel does not seem to receive the correct input through the pointer to kernelInputData. When I inspect the values from inside the kernel, I see that vectorPtr points to something with this layout: (1, 2, 3, 4, 5, ?, ?, ?), where the question marks are usually 4293848814, but sometimes 0. In any case, not what they should be.
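A stripped-down kernel that is enough to show the problem looks like this (the debugOut buffer is just an illustrative way of getting the components back to the host via clEnqueueReadBuffer; it is not part of the signature above):

kernel void kernelFunction(constant uint8 *vectorPtr, global uint *debugOut)
{
    /* copy each component of the input vector out so the host can inspect it */
    uint8 v = *vectorPtr;
    debugOut[0] = v.s0;
    debugOut[1] = v.s1;
    debugOut[2] = v.s2;
    debugOut[3] = v.s3;
    debugOut[4] = v.s4;
    debugOut[5] = v.s5;
    debugOut[6] = v.s6;
    debugOut[7] = v.s7;
}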
What am I doing wrong?
EDIT:
I switched from using an array to cl_uint8 on the host side. Now I have:
cl_uint8 dataVector = { 1, 2, 3, 4, 5, 6, 7, 8 };
And I pass this vector to the kernel like this:
clSetKernelArg(kernel, 0, sizeof(cl_uint8), &dataVector);
And the kernel function signature looks something like this:
kernel void kernelFunction(constant uint8 *vectorPtr)
However, running this code gives me a CL_INVALID_ARG_SIZE error from clSetKernelArg(). The error disappears if I change the arg_size parameter to sizeof(cl_uint8 *), but then I get an EXC_BAD_ACCESS crash in __dynamic_cast inside clSetKernelArg().
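In other words, the two variants I have tried are (err is a cl_int that I check after each call):

cl_uint8 dataVector = { 1, 2, 3, 4, 5, 6, 7, 8 };
cl_int err;

/* variant 1: pass the vector by value -> CL_INVALID_ARG_SIZE */
err = clSetKernelArg(kernel, 0, sizeof(cl_uint8), &dataVector);

/* variant 2: pointer-sized arg_size -> EXC_BAD_ACCESS inside clSetKernelArg */
err = clSetKernelArg(kernel, 0, sizeof(cl_uint8 *), &dataVector);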
My device:
Apple MacBook Pro (Mid-2009)
OS X 10.8 Mountain Lion
NVIDIA GeForce 9400M
OpenCL 1.0
CLH 1.0