This has been bothering me for a while, and I can't measure any real difference in performance either way, so I'm asking here:
If my images don't use the alpha channel, should I store them in the graphics card's memory as "GL_RGB", or as "GL_RGBA" on the theory that the card processes a full 32-bit block faster?
Or do graphics cards automatically convert 24-bit textures to 32-bit internally to speed up rendering?
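
For concreteness, here is a minimal sketch (not my actual code) of the two upload paths I'm comparing, assuming tightly packed 8-bit-per-channel RGB source data and a current OpenGL context; the function name and parameters are just for illustration:

    #include <GL/gl.h>

    /* Uploads a 24-bit RGB image, optionally asking the driver for
     * 32-bit internal storage. Assumes a current OpenGL context. */
    GLuint upload_rgb_texture(const unsigned char *pixels,
                              int width, int height,
                              int request_32bit_storage)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Rows of 24-bit data are not necessarily 4-byte aligned. */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        /* The client data is the same 24-bit RGB either way; only the
         * requested internal format differs. */
        GLint internal = request_32bit_storage ? GL_RGBA8 : GL_RGB8;
        glTexImage2D(GL_TEXTURE_2D, 0, internal, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }
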
Edit: I don't have a performance problem; I just want to do this the right way. I also want the program to run well on older graphics cards, which may not optimize these things the way newer cards do.