Confusion between C++ matrix order and OpenGL (row-major versus column-major)

I am very confused by the definitions of matrices. I have a matrix class that contains a float[16], which I assumed to be row-major, based on the following observations:

    float matrixA[16] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
    float matrixB[4][4] = { { 0, 1, 2, 3 }, { 4, 5, 6, 7 }, { 8, 9, 10, 11 }, { 12, 13, 14, 15 } };

matrixA and matrixB both have the same linear layout in memory (i.e. all the numbers are in order). According to http://en.wikipedia.org/wiki/Row-major_order this indicates a row-major layout.

    matrixA[0] == matrixB[0][0];
    matrixA[3] == matrixB[0][3];
    matrixA[4] == matrixB[1][0];
    matrixA[7] == matrixB[1][3];

Therefore, matrixB[0] is row 0, matrixB[1] is row 1, etc. Again, this indicates a row-major layout.
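A minimal standalone check of that indexing relation (plain C++, no OpenGL involved; written only to verify the claim):

    #include <cassert>

    int main() {
        float matrixA[16] = { 0, 1, 2, 3, 4, 5, 6, 7,
                              8, 9, 10, 11, 12, 13, 14, 15 };
        float matrixB[4][4] = { { 0, 1, 2, 3 }, { 4, 5, 6, 7 },
                                { 8, 9, 10, 11 }, { 12, 13, 14, 15 } };
        // C multidimensional arrays are row-major, so element i of the
        // flat array is row i/4, column i%4 of the 2D array.
        for (int i = 0; i < 16; ++i)
            assert(matrixA[i] == matrixB[i / 4][i % 4]);
        return 0;
    }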

My problem / confusion occurs when I create a translation matrix that looks like this:

    1, 0, 0, transX
    0, 1, 0, transY
    0, 0, 1, transZ
    0, 0, 0, 1

which is laid out in memory as { 1, 0, 0, transX, 0, 1, 0, transY, 0, 0, 1, transZ, 0, 0, 0, 1 } .

Then, when I call glUniformMatrix4fv, I need to set the transpose flag to GL_FALSE, indicating that the matrix is column-major; otherwise transforms such as translation and scale are not applied correctly:

If transpose is GL_FALSE, each matrix is assumed to be supplied in column-major order. If transpose is GL_TRUE, each matrix is assumed to be supplied in row-major order.
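In code, the two settings of the flag look roughly like this (a sketch: the location parameter and both matrix arrays are placeholders, and a GL context plus a function loader such as glad or GLEW are assumed):

    #include <GL/gl.h>

    // Upload one logical matrix that is stored two different ways.
    void uploadBothWays(GLint location,
                        const GLfloat columnMajor[16],
                        const GLfloat rowMajor[16]) {
        // Column-major data matches what OpenGL expects: upload as-is.
        glUniformMatrix4fv(location, 1, GL_FALSE, columnMajor);
        // Row-major data: GL_TRUE makes OpenGL transpose it on upload.
        glUniformMatrix4fv(location, 1, GL_TRUE, rowMajor);
    }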

Why does my matrix, which appears to be row-major, need to be passed to OpenGL as column-major?

+49
c++ math matrix opengl
Jul 18 '13 at 7:46
3 answers

The matrix notation used in the OpenGL documentation does not describe the in-memory layout of OpenGL matrices.

I think it will be easier if you drop/forget the whole "row/column-major" thing. That's because, in addition to row- versus column-major notation, the programmer can also decide how to lay out the matrix in memory (whether adjacent elements form rows or columns), which adds to the confusion.

OpenGL matrices have the same memory layout as DirectX matrices:

    xx xy xz 0
    yx yy yz 0
    zx zy zz 0
    px py pz 1

or

    { xx, xy, xz, 0, yx, yy, yz, 0, zx, zy, zz, 0, px, py, pz, 1 }
  • x, y, z are 3-component vectors describing the matrix's coordinate system (the local coordinate system, as seen within the global coordinate system).

  • p is a three-component vector that describes the origin of the matrix coordinate system.

This means that the translation matrix should be laid out in memory as follows:

 { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, transX, transY, transZ, 1 }. 

Remember that layout, and the rest should be easy.
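A minimal sketch of building a translation matrix in that layout (the function and parameter names are mine, for illustration only):

    // Fill a flat float[16] with a translation matrix: the basis vectors
    // x, y, z sit at indices 0-2, 4-6 and 8-10, and the origin p (the
    // translation) sits at indices 12-14.
    void makeTranslation(float m[16], float tx, float ty, float tz) {
        for (int i = 0; i < 16; ++i)
            m[i] = (i % 5 == 0) ? 1.0f : 0.0f;  // identity: 1s on the diagonal
        m[12] = tx;  // p.x
        m[13] = ty;  // p.y
        m[14] = tz;  // p.z
    }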

--- quote from the old OpenGL FAQ ---




9.005 Are OpenGL matrices column-major or row-major?

For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.

Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it is clearly stated.

Sadly, the use of column-major format in the specification and the blue book has resulted in endless confusion in the OpenGL programming community. Column-major notation suggests that matrices are not laid out in memory as a programmer would expect.




+53
Jul 18 '13 at 8:30

To summarize the answers of SigTerm and dsharlet: the usual way to transform a vector in GLSL is to multiply the transformation matrix by the vector:

    mat4 T;
    vec4 v;
    vec4 v_transformed;
    v_transformed = T*v;

For this to work, OpenGL expects the memory layout of T to be, as described by SigTerm,

 {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, transX, transY, transZ, 1 } 

which is also called "column-major". However, in your shader code (as indicated by your comments), you multiplied the vector by the transformation matrix:

 v_transformed = v*T; 

which yields the correct result only if T is transposed, i.e. has the layout

 { 1, 0, 0, transX, 0, 1, 0, transY, 0, 0, 1, transZ, 0, 0, 0, 1 } 

(i.e. "string major"). Since you already provided the correct layout for your shader, namely the major line, you did not need to set the transpose glUniform4v flag.

+30
Oct 08 '13 at

You are dealing with two separate problems.

First, your examples are dealing with the memory layout. Your [4][4] array is row-major because you used the convention established by C multidimensional arrays to match your linear array.

The second issue is a convention for how you treat matrices in your program. glUniformMatrix4fv is used to set a shader parameter. Whether your transform is meant to transform row vectors or column vectors is a matter of how you use the matrix in your shader code. Since you say you need to use column vectors, I assume your shader code uses a matrix A and a column vector x to compute x' = A x.

I would say the documentation of glUniformMatrix is confusing. The description of the transpose parameter is a really roundabout way of simply saying whether the matrix is transposed or not. OpenGL itself is just transporting that data to your shader; whether you want to transpose it or not is a convention you should establish for your program.
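In other words, GL_TRUE is just a convenience. A sketch of the equivalent manual transpose (helper name assumed):

    // Swap element (r, c) with (c, r) of a flat 4x4; this converts
    // between row-major and column-major storage of the same matrix.
    void transpose4x4(const float in[16], float out[16]) {
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out[c * 4 + r] = in[r * 4 + c];
    }

With this, glUniformMatrix4fv(loc, 1, GL_TRUE, m) behaves the same as transposing m into a temporary yourself and uploading that with GL_FALSE.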

This link has a good discussion: http://steve.hollasch.net/cgindex/math/matrix/column-vec.html

+12
Jul 18 '13 at 8:45


