glBufferData's second arg is GLsizeiptr, not GLsizei. Why?

Basically, what is GLsizeiptr, and why does glBufferData take what looks like a pointer type instead of an int? This argument is supposed to be the size of the buffer object, so why not GLsizei?

OpenGL documentation for glBufferData: https://www.opengl.org/sdk/docs/man/html/glBufferData.xhtml
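For reference, here is the declared signature and a minimal call, assuming a context with OpenGL 1.5+ already available (e.g. through a loader such as glad). Despite the "ptr" suffix in the type name, the size argument is an integer byte count, not a pointer:

```c
#include <GL/gl.h>  /* or your loader's header, e.g. <glad/glad.h> */

/* Signature as declared in the headers (OpenGL 1.5 and later):
 *   void glBufferData(GLenum target, GLsizeiptr size,
 *                     const void *data, GLenum usage);
 * The second argument is the size of the store in bytes. */

static void upload_triangle(void)
{
    const GLfloat verts[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)sizeof(verts),
                 verts, GL_STATIC_DRAW);
}
```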

1 answer

When vertex buffer objects were first introduced through the OpenGL extension mechanism (ARB_vertex_buffer_object), the new type GLsizeiptrARB was created, and the extension spec gives the following rationale:

What type should the <offset> and <size> arguments use?

RESOLVED: We define new types that will work well on 64-bit systems, analogous to C's "intptr_t". The new type "GLintptrARB" should be used in place of GLint whenever it is expected that values might exceed 2 billion. The new type "GLsizeiptrARB" should be used in place of GLsizei whenever it is expected that values might exceed 2 billion. Both types are defined as signed integers large enough to contain any pointer value. As a result, they naturally scale to larger numbers of bits on systems with 64-bit or even larger pointers.

The offsets introduced in this extension are typed GLintptrARB, consistent with other GL parameters that must be non-negative but are arithmetic in nature (not uint) and are not sizes; for example, the <xoffset> argument to TexSubImage*D is of type GLint. Buffer sizes are typed GLsizeiptrARB.

The idea of making these types unsigned was considered, but was ultimately rejected on the grounds that supporting buffers larger than 2 GB was not deemed important on 32-bit systems.
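A quick way to see this scaling in practice is the standalone sketch below. The typedef mirrors what common gl.h/glext.h headers do (ptrdiff_t); the official khrplatform.h definition spells it differently (khronos_ssize_t) but has the same width, so the exact typedef is platform-specific:

```c
#include <stdio.h>
#include <stddef.h>

/* Mirrors the common header definition; the real headers may
 * use a different spelling, but the width always matches the
 * platform's pointer size. */
typedef ptrdiff_t GLsizeiptr;
typedef ptrdiff_t GLintptr;

int main(void)
{
    /* Prints 4/4 on a 32-bit build and 8/8 on a 64-bit build:
     * the types scale with the pointer size, as intended. */
    printf("sizeof(GLsizeiptr) = %zu\n", sizeof(GLsizeiptr));
    printf("sizeof(void *)     = %zu\n", sizeof(void *));
    return 0;
}
```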

When this extension was promoted into core OpenGL, the type corresponding to the extension's GLsizeiptrARB received the standardized name GLsizeiptr, which is what you see in the function signature today.
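The practical consequence: on a 64-bit system you can request buffer storage larger than a 32-bit GLsizei could express. A sketch of the idea (whether the driver actually grants an allocation this large is a separate question; the point is only that the type can represent the value):

```c
/* 3 GiB does not fit in a 32-bit GLsizei, but fits comfortably
 * in a 64-bit GLsizeiptr. Passing NULL as the data pointer asks
 * GL to allocate uninitialized storage. */
GLsizeiptr big = (GLsizeiptr)3 * 1024 * 1024 * 1024;
glBufferData(GL_ARRAY_BUFFER, big, NULL, GL_DYNAMIC_DRAW);
```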
