This is an almost verbatim copy of my answer at fooobar.com/questions/370794/... with a few changes to fit this question.
C specifies (and C++ follows) that pointers can be converted to integers, namely of type uintptr_t, and that if the integer obtained that way is converted back to the original pointer type, it yields the original pointer.
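To illustrate, a minimal sketch of that round trip (the variable names are mine):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int obj = 42;
        void *p = &obj;              /* pointer to some object */
        uintptr_t n = (uintptr_t)p;  /* convert the pointer to an integer */
        void *q = (void *)n;         /* convert the integer back */
        assert(p == q);              /* the round trip yields the original pointer */
        return 0;
    }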
Then there is pointer arithmetic, which means that if I have two pointers pointing into the same object, I can take their difference, which yields an integer (of type ptrdiff_t), and that adding or subtracting that integer to or from either of the original pointers yields the other. It also specifies that adding 1 to a pointer yields a pointer to the next element of the indexed object. Also, the difference between the uintptr_t values of two pointers into the same object, divided by sizeof(type pointed to), should equal the result of subtracting the pointers themselves. Last but not least, uintptr_t values may be anything. They could just as well be opaque handles; they do not have to be addresses (although most implementations do it that way, because it makes sense).
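Again as a small sketch of these rules (the array and names are mine):

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        int arr[8];
        int *a = &arr[2];
        int *b = &arr[5];
        ptrdiff_t d = b - a;         /* difference of two pointers into the same object: 3 */
        printf("%td\n", d);
        printf("%d\n", a + d == b);  /* adding the difference back gives the other pointer: prints 1 */
        return 0;
    }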
Now we can look at the infamous null pointer. C defines the pointer that compares equal to a uintptr_t value of 0, cast to a pointer, as the invalid pointer. Note that in your source code this is always written as 0. On the backend side, in the compiled program, the binary value actually used to represent it on the machine may be something completely different! Usually it is not, but it may be. C++ is the same, but since C++ does not allow as much implicit casting as C, you must cast the 0 to void* explicitly. Furthermore, since the null pointer does not refer to an object and therefore has no dereferenced size, pointer arithmetic is undefined for the null pointer. The null pointer referring to no object also means that there is no defined meaning for casting it to a typed pointer.
So, if this is all undefined, why does this macro work in the end? Because most implementations (meaning compilers) are extremely gullible, and compiler writers are lazy to the highest degree. In the majority of implementations, the integer value of a pointer is simply the value of the pointer itself on the backend side. Thus the null pointer is actually 0. And although pointer arithmetic on the null pointer is not checked for, most compilers will silently accept it if the pointer has been given some type, even if it makes no sense. char is the "unit-sized" type of C, if you want to put it that way. So pointer arithmetic on a pointer cast to char* amounts to arithmetic on the backend-side addresses.
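For reference, the macro in question is usually spelled along these lines (this exact definition is my assumption; the variant you have may differ):

    #include <stddef.h>

    /* Typical definition: pointer arithmetic on the null pointer.
       Formally this is undefined behavior; it merely happens to do
       the expected thing on common implementations. */
    #define BUFFER_OFFSET(i) ((char *)NULL + (i))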
To make a long story short: it simply makes no sense to try to do pointer magic with the intended result being an offset on the C language side. It just doesn't work that way.
Let me step back for a moment and remember what we are actually trying to do. The original problem was that the OpenGL vertex array functions take a pointer as their data parameter, but for vertex buffer objects we actually want to specify a byte offset into our data, which is a number. To the C compiler, the function simply takes a pointer (an opaque thing, as we learned). The solution OpenGL chose exploits how compilers work: pointers and their integer equivalents are implemented as the same binary representation by most compilers. So what we have to do is make the compiler call these functions with our number instead of a pointer.
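For reference, this is the signature we are up against, as declared in the standard GL headers:

    void glDrawElements(GLenum mode, GLsizei count,
                        GLenum type, const GLvoid *indices);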
So, technically, the only thing we need to do is tell the compiler: "yes, I know you think this variable a is an integer, and you are right, and that function glDrawElements expects a void* for its data parameter. But guess what: this integer was obtained from a void*", by casting it to (void*), and then keeping our fingers crossed that the compiler is actually stupid enough to pass the integer value as-is to the function.
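In code, that finger-crossing looks something like this (draw_with_offset is a hypothetical helper of mine; the count and the bound element array buffer are assumed):

    #include <GL/gl.h>
    #include <stdint.h>

    /* Draw using a byte offset into the currently bound
       GL_ELEMENT_ARRAY_BUFFER. The cast smuggles the integer
       through the pointer-typed parameter. */
    static void draw_with_offset(GLsizei count, uintptr_t byte_offset)
    {
        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT,
                       (const GLvoid *)byte_offset);
    }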
So, it all boils down to somehow circumventing the old function signature. Casting the pointer is, IMHO, the dirty method. I would do it a bit differently: I would overload the function signature:
    typedef void (*TFPTR_DrawElementsOffset)(GLenum, GLsizei, GLenum, uintptr_t);
    TFPTR_DrawElementsOffset myglDrawElementsOffset =
        (TFPTR_DrawElementsOffset)glDrawElements;
Now you can use myglDrawElementsOffset without any silly casts, and the offset parameter will be passed to the function without any danger that the compiler might mangle it. This is also the very method I use in my programs.
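For example (count and the bound element array buffer are assumed):

    /* the offset, here 64 bytes, is passed as a plain integer */
    myglDrawElementsOffset(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 64);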