What is the purpose of binding points in OpenGL?

I don't understand the purpose of binding points (e.g. GL_ARRAY_BUFFER ) in OpenGL. In my understanding, glGenBuffers() creates a kind of handle to a vertex buffer object located somewhere in GPU memory.

So:

 glGenBuffers(1, &bufferID) 

means that I now have a handle, bufferID, to 1 vertex buffer object on the graphics card. I know that the next step is to bind bufferID to a binding point

 glBindBuffer(GL_ARRAY_BUFFER, bufferID) 

so that I can use this binding point to send data to it with the glBufferData() function, as follows:

 glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW) 

But why can't I just use bufferID to indicate where I want to send the data? Something like:

 glBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW) 

Then, when calling the draw function, I would just pass the ID of whichever VBO I want it to draw. Something like:

 glDrawArrays(bufferID, GL_TRIANGLES, 0, 3) 

Why do we need this extra level of indirection with glBindBuffer ?

1 answer

OpenGL uses object binding points for two things: to designate which object will be used as part of the rendering process, and to be able to modify the object.

Why it uses them for the former is simple: OpenGL needs a lot of objects in order to render.

Consider your overly simplified example:

 glDrawArrays(bufferID, GL_TRIANGLES, 0, 3) 

This API does not allow me to source separate vertex attributes from separate buffers. Of course, you could propose glDrawArrays(GLint count, GLuint *object_array, ...) . But how would you associate a specific buffer object with a specific vertex attribute? Or how would you source 2 attributes from buffer 0 and a third attribute from buffer 1? Those are things I can do right now with the current API, but your proposed one cannot handle them.
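To make the multi-buffer point concrete, here is a minimal sketch (assuming a GL 3.3-style context with a VAO already bound; the attribute indices and layouts are invented for illustration). The buffer bound to GL_ARRAY_BUFFER at the time of each glVertexAttribPointer call is the one that attribute reads from, and that per-attribute association has no place to live in a single-buffer-ID draw call:

```c
GLuint buffers[2];
glGenBuffers(2, buffers);
/* ...glBufferData uploads omitted... */

/* Attributes 0 (position) and 1 (normal) are sourced from buffers[0]. */
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                      (void*)(3 * sizeof(float)));

/* Attribute 2 (texcoord) is sourced from buffers[1]. */
glBindBuffer(GL_ARRAY_BUFFER, buffers[1]);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);

/* No buffer ID here: the draw uses the associations recorded above. */
glDrawArrays(GL_TRIANGLES, 0, 3);
```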

And that leaves aside the many other objects needed for rendering: program/pipeline objects, texture objects, UBOs, SSBOs, transform feedback objects, query objects, etc. Having every necessary object specified in a single command would be fundamentally unworkable (and that ignores the performance cost).

And every time the API needed a new kind of object, you would have to add new variants of the glDraw* functions. There are already over a dozen such functions; your way would give us hundreds.

So instead, OpenGL defines ways for you to say: "the next time I render, use this object in this way for this process." That is what binding an object for use means.


But why can't I just use bufferID to indicate where I want to send the data?

This is about binding an object in order to modify it, not to specify that it will be used. That... is a different question.

The obvious answer is: "you cannot do that because the OpenGL API (before 4.5) does not have a function that lets you do it." But I rather suspect the real question is why OpenGL does not have such APIs (before 4.5, where glNamedBufferStorage and friends exist).

Indeed, the fact that 4.5 does have such functions proves that there is no technical reason for pre-4.5 OpenGL's bind-object-to-modify API. It was a "decision" that arose from the evolution of the OpenGL API from 1.0 onward, by following the path of least resistance. Repeatedly.
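For contrast, here is a sketch of the same upload in both styles (reusing bufferID and data from the question; glNamedBufferData is the GL 4.5 direct-state-access analog of glBufferData):

```c
/* Pre-4.5 bind-to-modify: the target parameter selects which
 * currently-bound object gets modified. */
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

/* GL 4.5 / ARB_direct_state_access: the object is named directly;
 * no binding is required just to upload. */
glNamedBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW);
```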

In fact, almost every bad decision in OpenGL can be traced back to taking the path of least resistance in the API. But I digress.

In OpenGL 1.0, there was only one kind of object: display list objects. That means even textures were not stored in objects. So every time you switched textures, you had to re-specify the entire texture with glTexImage*D . That means re-uploading it. Now, you could (and people did) wrap each texture's creation in a display list, which let you switch textures by executing that display list. And hopefully the driver would recognize that you were doing this and allocate video memory accordingly.

So when 1.1 came around, the OpenGL ARB realized how dumb that was. They therefore created texture objects, which encapsulate both a texture's memory storage and the various state within it. When you wanted to use a texture, you bound it. But there was a snag: namely, how to modify it.

See, 1.0 had a bunch of pre-existing functions like glTexImage*D , glTexParameter and the like, which modify texture state. The ARB could have added new functions that do the same thing but take texture objects as parameters.

But that would have split all OpenGL users into two camps: those who used texture objects and those who did not. It would have meant that, to use texture objects, you had to rewrite all of your existing texture-modifying code. If you had some function that made a bunch of glTexParameter calls on the current texture, you would have to change it to call the new texture-object function. But you would also have to change every function that calls it, so that the texture object to operate on gets passed down as a parameter.

And if that function did not belong to you (because it was part of a library you were using), you could not even do that.

So the ARB decided to keep those old functions and simply have them behave differently based on whether a texture was bound or not. If one was bound, then glTexParameter /etc would modify the bound texture object rather than the context's default texture.
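In code, that decision looks something like this (a sketch; width , height and pixels are assumed to be defined elsewhere). Note that the 1.0-era entry points never name the object they act on; the binding supplies it:

```c
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);   /* bind-to-modify */

/* These old 1.0 entry points now act on `tex`, purely because
 * it is what is currently bound to GL_TEXTURE_2D. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```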

This single decision established the general paradigm shared by almost all OpenGL objects.

ARB_vertex_buffer_object used this paradigm for the same reason. Notice how the various gl*Pointer functions ( glVertexAttribPointer , etc.) work with buffers. You bind a buffer to GL_ARRAY_BUFFER , then call one of those functions to set up an attribute array. When a buffer is bound to that slot, the function picks it up and treats the pointer as an offset into the buffer that was bound at the time the *Pointer function was called.
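That reinterpretation can be sketched like this (assuming a compatibility-profile context for the legacy client-array case; clientArray is an assumed client-memory array). The same entry point means two different things depending on what is bound:

```c
/* Legacy client arrays: nothing bound, the last argument is a real pointer. */
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, clientArray);

/* VBO path: a buffer is bound, so the same argument is now read as a
 * byte offset into bufferID (here, offset 0). */
glBindBuffer(GL_ARRAY_BUFFER, bufferID);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
```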

Why? For the same reason: ease of compatibility (or promotion of laziness, depending on how you want to see it). ATI_vertex_array_object had to create new analogs of the gl*Pointer functions, whereas ARB_vertex_buffer_object simply piggybacked on the existing entry points.

Users did not have to switch from glVertexPointer to glVertexBufferOffset or some other new function. All they had to do was bind the buffer before calling a function that set up vertex information (and, of course, change their pointers to byte offsets).

It also meant that the ARB did not need to add a bunch of glDrawElementsWithBuffer -type functions for rendering with indices sourced from buffer objects.

So it was a good idea in the short term. But, as with most short-term decisions, it became less reasonable over time.

Of course, if you have access to GL 4.5 / ARB_direct_state_access, you can do things the way they arguably should have been done originally.
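A sketch of that 4.5 / direct-state-access style, where objects are created ready to use and modified by name, and binding is reserved for actually rendering ( data and the vertex layout are illustrative):

```c
GLuint buf, vao;

/* Objects are fully created up front, not merely name-reserved. */
glCreateBuffers(1, &buf);
glNamedBufferStorage(buf, sizeof(data), data, 0);

/* The VAO is configured by name: attribute 0 reads 3 floats per
 * vertex from `buf`, starting at offset 0. */
glCreateVertexArrays(1, &vao);
glVertexArrayVertexBuffer(vao, 0, buf, 0, 3 * sizeof(float));
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);
glEnableVertexArrayAttrib(vao, 0);

/* Binding now only says "use this for rendering". */
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
```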


Source: https://habr.com/ru/post/1257852/
