I am very puzzled by the poor performance I see when drawing a full-screen background with a textured triangle mesh in OpenGL: drawing just the background and nothing else runs at 40 frames per second with the most basic shader, and at 50 frames per second with the default pipeline.
While 40 frames per second doesn't sound too bad, adding anything on top of it drops the frame rate further, and since I need to draw another 100-200 meshes, I end up at a paltry 15 frames per second, which is simply unusable.
I have highlighted the relevant code in the Xcode project available here, but it boils down to a canonical texture-mapped quad:
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};

static const GLfloat texCoords[] = {
    0.125, 1.0,
    0.875, 1.0,
    0.125, 0.0,
    0.875, 0.0
};

glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

if ([context API] == kEAGLRenderingAPIOpenGLES2) {
Vertex Shader:
attribute lowp vec4 position;
attribute lowp vec2 tex;

varying lowp vec2 texCoord;

uniform float translate;

void main()
{
    gl_Position = position;
    texCoord = tex;
}
Fragment Shader:
varying lowp vec2 texCoord;

uniform sampler2D texture;

void main()
{
    gl_FragColor = texture2D(texture, texCoord);
}
Halving the size of the rectangle doubles the frame rate, so rendering time clearly scales with the screen area being drawn. That is perfectly reasonable, but it makes no sense to me that it should be impossible to cover the entire screen with textured OpenGL meshes at more than 15 frames per second.
But there are hundreds of games that do exactly this, so it must be possible, and I must be doing something wrong. What is it?