How do you represent a normal or texture coordinate using GLshorts?

Many iPhone game performance tips revolve around sending less data to the GPU. One obvious suggestion is to use GLshort instead of GLfloat wherever possible, for example for vertices, normals, or texture coordinates.

What are the specifics of using GLshort for normals or texture coordinates? Is it possible to represent a GLfloat texture coordinate of 0.5 when using GLshort? If so, how is it done? Would it simply be SHRT_MAX / 2, i.e. does the GLfloat range 0 to 1 map to 0 to SHRT_MAX for GLshort texture coordinates?

What about normals? I have always created normals as GLfloats and normalized them to unit length. When you use GLshort for a normal, do you send an unnormalized vector to the GPU? If so, when and how does it get normalized? By dividing every component by SHRT_MAX?

+2
3 answers

Normals

If you glEnable( GL_NORMALIZE ), then you can submit normals as GL_BYTE (or short). A normal submitted as the bytes 0xFF 0xCC 0x33 is normalized by the GPU to (0.77, 0.62, 0.15).
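
For reference (not from the original answer), a minimal ES 1.1 sketch of what submitting packed byte normals with GL_NORMALIZE might look like; the header path is the iOS one and the names and array contents are placeholders:

#include <OpenGLES/ES1/gl.h>

static const GLbyte byteNormals[] = {
    127,   0,   0,   /* +X */
      0, 127,   0,   /* +Y */
      0,   0, 127,   /* +Z */
};

static void submitByteNormals(void)
{
    glEnable(GL_NORMALIZE);                    /* GPU renormalizes each normal */
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_BYTE, 0, byteNormals);  /* normals always have 3 components */
}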

Note that there is a small performance penalty from GL_NORMALIZE, because the GPU has to normalize every normal.

Another catch with GL_NORMALIZE is that you can no longer do "trickery" with your normals.

Edit: "trickery", ( , 1.0), .

Texture coordinates

As for texture coordinates (and vertex positions), OpenGL maps integer values "directly" to floating point: 0 maps to 0.0, 1 maps to 1.0, and 255 maps to 255.0 (not to the maximum value of the type). So there is no way to represent 0.5 with a GLshort texture coordinate on its own.

That is mildly annoying, but you can rescale the coordinates with the texture matrix. (On most hardware the cost of that transform is negligible.)

+2

In OpenGL ES 1.1, byte and short normals are mapped to the range [-1:1], not [0:1]. (See section 2.7 of the specification.)

(for shorts)   n_x = (2c + 1) / (2^16 − 1)

So GL_NORMALIZE should not be needed, provided the normals you submit are already unit length under this mapping (i.e. scaled out to the full short range).
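
A hedged sketch of that CPU-side conversion (the helper name and the scale factor of 32767 are my own, chosen as the approximate inverse of the (2c + 1) / (2^16 − 1) mapping above):

#include <math.h>
#include <OpenGLES/ES1/gl.h>

/* Convert a unit-length float normal to GLshort components. Scaling by
 * 32767 keeps the submitted normal as close to unit length as the short
 * mapping allows, so GL_NORMALIZE shouldn't be required. */
static void packNormal(const float n[3], GLshort out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = (GLshort)lroundf(n[i] * 32767.0f);
}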

Texture coordinates, on the other hand, are not rescaled this way (short texcoords are not mapped to [0:1]...). To scale them yourself, you can use the texture matrix:

glMatrixMode(GL_TEXTURE);
glLoadMatrixf(matrix_that_does_conversion_based_on_type);
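
Not from the answer itself, but one way that conversion could look, assuming the texcoords were packed so that SHRT_MAX (32767) represents 1.0; a plain scale is enough, so glScalef can stand in for a full glLoadMatrixf:

#include <OpenGLES/ES1/gl.h>
#include <limits.h>

static void setShortTexcoordMatrix(void)
{
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glScalef(1.0f / SHRT_MAX, 1.0f / SHRT_MAX, 1.0f);
    glMatrixMode(GL_MODELVIEW);   /* restore the usual matrix mode */
}
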
+4

I had success using GLshorts as texture coords by multiplying them by 1000 when building the VBO, and then dividing by 1000 in the shader before use. Of course, you will have to weigh the extra calculations against the memory savings on your device.
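
A small sketch of that scale-by-1000 scheme under ES 2.0; the attribute names and the example data are placeholders, not from the answer:

#include <OpenGLES/ES2/gl.h>

/* CPU side: a float coordinate of 0.5 is stored as the short 500.
 * Bind with glVertexAttribPointer(loc, 2, GL_SHORT, GL_FALSE, 0, packedUV). */
static const GLshort packedUV[] = { 0, 0,  1000, 0,  1000, 1000,  0, 1000 };

/* Vertex shader: undo the scale before the texcoord is used. */
static const char *vertexShaderSource =
    "attribute vec4 a_position;        \n"
    "attribute vec2 a_packedUV;        \n"
    "varying vec2 v_uv;                \n"
    "void main() {                     \n"
    "    v_uv = a_packedUV / 1000.0;   \n"
    "    gl_Position = a_position;     \n"
    "}                                 \n";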

This thread comes up near the top when googling for using GLshort to improve performance, so apologies for posting in such an old thread.

+3

Source: https://habr.com/ru/post/1745408/

