GLSL Instancing - maximum number of inputs for vertex data?

I am trying to implement instancing in my OpenGL program. I got it to work, and then decided to make my GLSL code more efficient by sending the Model-View-Projection matrix as an input to the GLSL program, so that the CPU calculates it once per instance instead of the GPU. Here is my vertex shader code (most of it is not relevant to my question):

    #version 330 core

    // Input vertex data, different for all executions of this shader.
    layout(location = 0) in vec3 vertexPosition_modelspace;
    layout(location = 2) in vec3 vertexColor;
    layout(location = 3) in vec3 vertexNormal_modelspace;
    layout(location = 6) in mat4 models;
    layout(location = 10) in mat4 modelsV;
    layout(location = 14) in mat4 modelsVP;

    // Output data; will be interpolated for each fragment.
    out vec3 newColor;
    out vec3 Position_worldspace;
    out vec3 Normal_cameraspace;
    out vec3 EyeDirection_cameraspace;

    // Values that stay constant for the whole mesh.
    uniform mat4 MVP;
    uniform mat4 MV;
    uniform mat4 P;
    uniform mat4 V;
    uniform mat4 M;
    uniform int num_lights;
    uniform vec3 Lights[256];

    void main(){
        // Output position of the vertex, in clip space : MVP * position
        gl_Position = P * modelsV * vec4(vertexPosition_modelspace, 1);

        // Position of the vertex, in worldspace : M * position
        Position_worldspace = (models * vec4(vertexPosition_modelspace, 1)).xyz;

        // Vector that goes from the vertex to the camera, in camera space.
        // In camera space, the camera is at the origin (0,0,0).
        vec3 vertexPosition_cameraspace = (modelsV * vec4(vertexPosition_modelspace, 1)).xyz;
        EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

        // Normal of the vertex, in camera space
        Normal_cameraspace = (modelsV * vec4(vertexNormal_modelspace, 0)).xyz;

        // UV of the vertex. No special space for this one.
        newColor = vertexColor;
    }

The above code works, but only because I do not use the last input, modelsVP, to calculate gl_Position. If I use it (instead of calculating P * modelsV), the instances are not drawn, and I get this error:

    Linking program
    Compiling shader : GLSL/meshColor.vertexshader
    Compiling shader : GLSL/meshColor.fragmentshader
    Linking program
    Vertex info
    0(10) : error C5102: input semantic attribute "ATTR" has too big of a numeric index (16)
    0(10) : error C5102: input semantic attribute "ATTR" has too big of a numeric index (16)
    0(10) : error C5041: cannot locate suitable resource to bind variable "modelsVP". Possibly large array.

I am sure that I am binding it correctly in my OpenGL code, because if I swap the locations of modelsVP and modelsV so that modelsVP is at location 10 instead of 14, I can use it, but then not modelsV. Is there a maximum number of inputs for the vertex shader? I really can't think of any other reason why I would get this error...

I will include more of the relevant OpenGL code below, but I'm fairly sure it is not the problem (it is not all in one class or method):

    // Buffer data for VBO. The numbers must match the layout in the GLSL code.
    #define position 0
    #define uv 1
    #define color 2
    #define normal 3
    #define tangent 4
    #define bitangent 5
    #define model 6      // 4x4 matrices take 4 positions
    #define modelV 10
    #define modelVP 14
    #define num_buffers 18

    GLuint VBO[num_buffers];
    glGenBuffers(num_buffers, VBO);

    for( int i=0; i<ModelMatrices.size(); i++ )
    {
        mvp.push_back( projection * view * ModelMatrices.at(i) );
        mv.push_back( view * ModelMatrices.at(i) );
    }

    glBindBuffer(GL_ARRAY_BUFFER, VBO[model]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * ModelMatrices.size(), &ModelMatrices[0], GL_DYNAMIC_DRAW);
    for (unsigned int i = 0; i < 4; i++)
    {
        glEnableVertexAttribArray(model + i);
        glVertexAttribPointer(model + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4), (const GLvoid*)(sizeof(GLfloat) * i * 4));
        glVertexAttribDivisor(model + i, 1);
    }

    glBindBuffer(GL_ARRAY_BUFFER, VBO[modelV]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * mv.size(), &mv[0], GL_DYNAMIC_DRAW);
    for (unsigned int i = 0; i < 4; i++)
    {
        glEnableVertexAttribArray(modelV + i);
        glVertexAttribPointer(modelV + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4), (const GLvoid*)(sizeof(GLfloat) * i * 4));
        glVertexAttribDivisor(modelV + i, 1);
    }

    glBindBuffer(GL_ARRAY_BUFFER, VBO[modelVP]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * mvp.size(), &mvp[0], GL_DYNAMIC_DRAW);
    for (unsigned int i = 0; i < 4; i++)
    {
        glEnableVertexAttribArray(modelVP + i);
        glVertexAttribPointer(modelVP + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4), (const GLvoid*)(sizeof(GLfloat) * i * 4));
        glVertexAttribDivisor(modelVP + i, 1);
    }
2 answers

OpenGL mandates that implementations offer a minimum of 16 four-component vertex attributes. Therefore, an attribute index of 16 is not guaranteed to be supported by all implementations; see GL_MAX_VERTEX_ATTRIBS for details.

A mat4 vertex attribute counts as four 4-component attributes, occupying four consecutive locations, so a mat4 at location 14 uses locations 14 through 17. That is out of range on implementations that only support 16 four-component vertex attributes (locations 0 through 15).
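To see where a given implementation's limit actually sits, the cap can be queried at runtime. This snippet is not from the answer; it is a minimal sketch that assumes a current OpenGL context and uses printf purely for illustration:

    // Sketch: query how many 4-component vertex attribute slots the
    // implementation supports (requires a current OpenGL context).
    GLint maxAttribs = 0;
    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);
    printf("GL_MAX_VERTEX_ATTRIBS = %d\n", maxAttribs); // commonly 16

    // A mat4 attribute occupies 4 consecutive locations, so modelsVP at
    // location 14 needs locations 14..17 -- invalid if the limit is 16.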


You are using too many vertex attributes. Here's how to reduce the number of attributes without losing any functionality (indeed, the functional changes are improvements). The following assumes that models is a model-to-world matrix, modelsV is a model-to-camera matrix, and modelsVP is a model-view-projection matrix:

    #version 330 core

    // Input vertex data, different for all executions of this shader.
    layout(location = 0) in vec3 vertexPosition_modelspace;
    layout(location = 2) in vec3 vertexColor;
    layout(location = 3) in vec3 vertexNormal_modelspace;
    layout(location = 6) in mat4 modelsV;

    // Output data; will be interpolated for each fragment.
    out vec3 newColor;
    // The fragment shader should work in *camera* space, not world space.
    out vec4 Position_cameraspace;
    out vec3 Normal_cameraspace;
    //out vec3 EyeDirection_cameraspace; // Can be computed from Position_cameraspace in the FS.

    // Values that stay constant for the whole mesh.
    uniform mat4 P;

    void main()
    {
        Position_cameraspace = modelsV * vec4(vertexPosition_modelspace, 1.0);
        gl_Position = P * Position_cameraspace;
        Normal_cameraspace = (modelsV * vec4(vertexNormal_modelspace, 0)).xyz;
        newColor = vertexColor;
    }

See? Isn't that much simpler? Fewer uniforms in the vertex shader, fewer outputs to the fragment shader, fewer math computations, and fewer vertex attributes.
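On the application side, the only per-instance data left is modelsV, with P supplied as a uniform. The sketch below is not from the original answer; it reuses the question's variable names (VBO, modelV, mv, projection) and assumes programID is the linked shader program:

    // Sketch: with the revised shader, modelsV now sits at layout location 6.
    const GLuint modelVLoc = 6;

    glBindBuffer(GL_ARRAY_BUFFER, VBO[modelV]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * mv.size(), &mv[0], GL_DYNAMIC_DRAW);
    for (unsigned int i = 0; i < 4; i++)
    {
        glEnableVertexAttribArray(modelVLoc + i);
        glVertexAttribPointer(modelVLoc + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                              (const GLvoid*)(sizeof(GLfloat) * i * 4));
        glVertexAttribDivisor(modelVLoc + i, 1); // advance once per instance
    }

    // P changes at most once per frame, not per instance, so it stays a uniform.
    glUseProgram(programID);
    glUniformMatrix4fv(glGetUniformLocation(programID, "P"), 1, GL_FALSE, &projection[0][0]);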

All you have to do is change your fragment shader to use the camera-space position rather than the world-space position, which should be a reasonably easy change.
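For illustration only (not part of the original answer), a matching fragment shader might look like the sketch below. The shading itself is a placeholder; the real lighting code (the Lights array, etc.) would go where noted. It mainly shows how EyeDirection_cameraspace can be recovered from the interpolated camera-space position:

    #version 330 core

    in vec3 newColor;
    in vec4 Position_cameraspace;
    in vec3 Normal_cameraspace;

    out vec4 fragColor;

    void main()
    {
        // In camera space the eye sits at the origin, so the view direction
        // is just the negated interpolated vertex position.
        vec3 EyeDirection_cameraspace = -Position_cameraspace.xyz;

        vec3 n = normalize(Normal_cameraspace);
        vec3 e = normalize(EyeDirection_cameraspace);

        // Placeholder shading: simple eye-facing term; replace with your
        // actual camera-space lighting computation.
        float facing = clamp(dot(n, e), 0.0, 1.0);
        fragColor = vec4(newColor * facing, 1.0);
    }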


Source: https://habr.com/ru/post/1499889/

