GLSL + OpenGL: transitioning away from the fixed-function pipeline

Hi guys, I've started moving one of my projects away from the fixed-function pipeline. As a first step, I tried to write a shader that would simply take the OpenGL matrices and transform the vertex with them, and then start computing my own matrices once I knew that worked. I thought it would be a simple task, but even that would not work.

I started with this shader for a regular fixed pipeline:

    void main(void)
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }

Then I changed it to the following:

    uniform mat4 model_matrix;
    uniform mat4 projection_matrix;

    void main(void)
    {
        gl_Position = model_matrix * projection_matrix * gl_Vertex;
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }

Then I extract the OpenGL matrices and pass them to the shader with this code:

    [material.shader bindShader];

    GLfloat modelmat[16];
    GLfloat projectionmat[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, modelmat);
    glGetFloatv(GL_PROJECTION_MATRIX, projectionmat);

    glUniformMatrix4fv([material.shader getUniformLocation:"model_matrix"], 1, GL_FALSE, modelmat);
    glUniformMatrix4fv([material.shader getUniformLocation:"projection_matrix"], 1, GL_FALSE, projectionmat);

    // ... Draw Stuff

For some reason, it doesn't draw anything. (I'm 95% positive the matrices are correct before I pass them, by the way.) Any ideas?

+4
3 answers

The problem was that my matrix multiplication order was wrong. I did not realize that matrix multiplication is not commutative.

The correct order should be:

    projection * modelview * vertex
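Applied to the shader from the question (keeping its uniform names, where model_matrix actually holds the modelview matrix), that gives:

    uniform mat4 model_matrix;      // holds GL_MODELVIEW_MATRIX
    uniform mat4 projection_matrix; // holds GL_PROJECTION_MATRIX

    void main(void)
    {
        // Projection is applied last, so it appears first in the product.
        gl_Position = projection_matrix * model_matrix * gl_Vertex;
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }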

Thanks to ltjax and doug65536

+2

For the matrix math, try using an external library such as GLM. It also comes with basic examples of how to build the necessary matrices and compute the projection * view * model transform.
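As an illustration only (this sketch is not from the original answer; the uniform location mvp_location and the camera parameters are made-up placeholders), building the combined matrix with GLM might look like:

    // Build projection * view * model on the CPU and upload it as one
    // uniform. Assumes a GL context and a bound shader program with an
    // "mvp" uniform whose location is already stored in mvp_location.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    glm::mat4 projection = glm::perspective(glm::radians(60.0f),  // vertical FOV
                                            4.0f / 3.0f,          // aspect ratio
                                            0.1f, 100.0f);        // near/far planes
    glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),    // camera position
                                  glm::vec3(0.0f),                // look-at target
                                  glm::vec3(0.0f, 1.0f, 0.0f));   // up vector
    glm::mat4 model = glm::mat4(1.0f);                            // identity for now

    glm::mat4 mvp = projection * view * model;                    // note the order
    glUniformMatrix4fv(mvp_location, 1, GL_FALSE, glm::value_ptr(mvp));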

+2

Use the OpenGL 3.3 shading language (GLSL 3.30). OpenGL 3.3 is roughly comparable to DirectX 10 in terms of hardware.

Do not use legacy features. Almost everything in your first void main example is deprecated. You should declare your own ins and outs if you expect to hit the high-performance driver code path. Deprecated functionality is also much more likely to run into driver bugs.

Use the newer, more explicit style of declaring inputs and outputs, and set them up in your code. It is really nice. I thought it would be ugly, but it was actually pretty easy (I only wish I had done it sooner).
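To illustrate (this sketch is mine, not the answerer's; the attribute locations and variable names are arbitrary), the question's shader in that explicit style might look like:

    #version 330 core

    // Explicit inputs replace the deprecated gl_Vertex / gl_MultiTexCoord0.
    layout(location = 0) in vec4 position;
    layout(location = 1) in vec2 texcoord;

    // Explicit uniforms replace gl_ModelViewProjectionMatrix.
    uniform mat4 projection_matrix;
    uniform mat4 modelview_matrix;

    // Explicit output replaces gl_TexCoord[0]; the fragment shader
    // declares a matching "in vec2 v_texcoord".
    out vec2 v_texcoord;

    void main(void)
    {
        gl_Position = projection_matrix * modelview_matrix * position;
        v_texcoord = texcoord;
    }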

FYI, the last time I checked (2012), the lowest common denominator for OpenGL was 3.3. Almost all AMD and NVIDIA graphics cards with any gaming capability support OpenGL 3.3, and have for a while, so any code you write now against OpenGL 3.3 will run on typical low-end or better GPUs.

0

Source: https://habr.com/ru/post/1347584/
