How to cast an int to const GLvoid *?

In my OpenGL cross-platform application, I want to draw using vertex buffer objects. However, I run into problems when calling glDrawRangeElements.

glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT, static_cast<GLvoid *> (start * sizeof(unsigned int))); 

The compiler (Clang on Mac OS X) does not like the last argument: "error: cannot cast from type 'unsigned long' to pointer type 'GLvoid *' (aka 'void *')". The OpenGL API defines the type of the last argument as const GLvoid * and expects a pointer when the API is used with vertex arrays. However, I understand that when using vertex buffer objects, instead of a pointer you are expected to pass an integer value representing the offset into the buffer data. This is what I am trying to do, hence the cast. How can I satisfy the API requirements while the compiler imposes its strict checks?

+4
5 answers

I got it to compile with Clang and C++11 by using a vintage C-style cast.

 glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT, (GLvoid *) (start * sizeof(unsigned int))); 

Alternatives that I liked less, but which the compiler also accepted, were:

 glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT, reinterpret_cast<GLvoid *>(static_cast<uintptr_t>(start * sizeof(unsigned int))));
 glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT, (char *)(0) + start * sizeof(unsigned int));
0

Because this cast is needed so often, people commonly use a macro for it. It can be defined as follows:

 #define BUFFER_OFFSET(i) ((char *)NULL + (i)) 

This is a clean and safe way to cast, because it makes no assumption that integer and pointer types are the same size, which they often are not on 64-bit systems.

Since I personally prefer C++-style casts and do not use NULL, I would define it like this:

 #define BUFFER_OFFSET(idx) (static_cast<char*>(0) + (idx)) 
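
Applied to the call from the question, either variant of the macro would be used roughly like this (a sketch; start and count are the variables from the original snippet):

 glDrawRangeElements(GL_TRIANGLES, start, start + count, count, GL_UNSIGNED_INT,
                     BUFFER_OFFSET(start * sizeof(unsigned int)));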
+6

Like this:

 reinterpret_cast<GLvoid *>(start * sizeof(unsigned int));

+1

This is an almost verbatim copy of my answer at fooobar.com/questions/370794 / ..., with a few changes to fit this question.


C specifies (and C++ follows) that pointers can be converted to integers, namely of type uintptr_t , and that if the integer obtained that way is converted back to the original pointer type, it yields the original pointer.
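
A minimal sketch of that round trip, assuming C++11; the variable names are made up for illustration:

 #include <cstdint>

 int value = 42;
 int *original = &value;
 uintptr_t asInteger = reinterpret_cast<uintptr_t>(original); // pointer -> integer
 int *roundTripped = reinterpret_cast<int *>(asInteger);      // integer -> original pointer type
 // roundTripped is guaranteed to compare equal to original.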

Then there is pointer arithmetic, which means that if I have two pointers pointing into the same object, I can subtract them, which yields an integer (of type ptrdiff_t ), and that adding or subtracting that integer to or from either of the original pointers gives the other one. It also specifies that adding 1 to a pointer yields a pointer to the next element of the indexed object. Also, the difference of the uintptr_t values of two pointers into the same object, divided by sizeof(type pointed to) , should equal the difference of the pointers themselves. Last but not least, uintptr_t values can be arbitrary. They may also be opaque handles; they do not have to be addresses (although most implementations do it that way because it makes sense).
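
A small illustrative sketch of those rules (the array and variable names are made up):

 #include <cstddef>
 #include <cstdint>

 unsigned int data[8];
 unsigned int *a = &data[2];
 unsigned int *b = &data[5];
 std::ptrdiff_t diff = b - a;   // 3: subtracting pointers into the same object gives an integer
 unsigned int *c = a + diff;    // adding that integer back gives the other pointer, so c == b
 // And, as stated above, (uintptr_t)b - (uintptr_t)a divided by sizeof(unsigned int) is also 3.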

Now let us look at the infamous null pointer. C defines the pointer obtained from an integer of value 0 (of type uintptr_t ) to be the invalid pointer. Note that this is always 0 in your source code. On the backend side, in the compiled program, the binary value actually used to represent it on the machine may be something completely different! It usually is not, but it may be. C++ is the same, but C++ does not allow as much implicit casting as C, so you have to cast 0 to void* explicitly. Also, since the null pointer does not refer to an object and therefore has no dereferenced size, pointer arithmetic is undefined for the null pointer. The null pointer referring to no object also means that there is no definition for sensibly casting it to a typed pointer.

So if this is all undefined, why does the macro work in the end? Because most implementations (meaning compilers) are extremely gullible, and compiler writers are lazy to the highest degree. In most implementations, the integer value of a pointer is simply the value of the pointer itself on the backend side. So the null pointer is actually 0. And although pointer arithmetic on the null pointer is not checked for, most compilers will silently accept it if the pointer has been given some type, even if it makes no sense. char is the "unit-sized" type of C, if you want to put it that way. So pointer arithmetic on the cast result behaves like arithmetic on addresses on the backend side.

To make a long story short: it simply makes no sense to try to do pointer magic with the intended result being an offset on the C language side; it just does not work that way.

Let me step back for a moment and remember what we are actually trying to do. The original problem was that the old OpenGL vertex array functions take a pointer as their data parameter, but for vertex buffer objects we actually want to specify a byte offset into our data, which is a number. To the C compiler, the function takes a pointer (an opaque thing, as we have learned). The workaround OpenGL defines exploits how compilers work: pointers and their integer equivalents are implemented with the same binary representation by most compilers. So what we have to do is make the compiler call these functions with our number instead of a pointer.

So technically, the only thing we need to do is tell the compiler "yes, I know you think this variable a is an integer, and you are right, and the function glDrawElements takes a void* for its data parameter. But guess what: this integer was obtained from a void* ", by casting it to (void*) , and then keeping our fingers crossed that the compiler is actually dumb enough to pass the integer value as-is to the function.
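
Spelled out as code, that "lie to the compiler" step looks roughly like this (a sketch reusing the question's start and count ; the variable names are made up):

 GLsizeiptr byteOffset = start * sizeof(unsigned int);           // really just a number
 const GLvoid *pretendPointer = reinterpret_cast<const GLvoid *>(byteOffset);
 glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, pretendPointer);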

So this all boils down to somehow circumventing the old function signature. Casting the pointer is, IMHO, the dirty method. I would do it a little differently: I would cast the function signature:

 typedef void (*TFPTR_DrawElementsOffset)(GLenum, GLsizei, GLenum, uintptr_t);
 TFPTR_DrawElementsOffset myglDrawElementsOffset = (TFPTR_DrawElementsOffset) glDrawElements;

Now you can call myglDrawElementsOffset without any silly casts, and the offset parameter will be passed to the function without any danger that the compiler could mangle it. This is also the very method I use in my own programs.
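
A usage sketch with the typedef above (again reusing the question's variables; the byte offset now goes in as a plain integer):

 myglDrawElementsOffset(GL_TRIANGLES, count, GL_UNSIGNED_INT,
                        start * sizeof(unsigned int));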

+1

You can try calling it like this:

 // The first to last vertex is 0 to 3
 // 6 indices will be used to render the 2 triangles. This makes our quad.
 // The last parameter is the start address in the IBO => zero
 glDrawRangeElements(GL_TRIANGLES, 0, 3, 6, GL_UNSIGNED_SHORT, NULL);

Check out the OpenGL tutorial.

-2

Source: https://habr.com/ru/post/973209/

