This question is about OpenGL ES 2.0 (on Android), but it may apply to OpenGL more generally.
Ultimately, all performance questions are implementation-dependent, but if anyone can answer this in general or from their own experience, that would be helpful. I am also writing test code to measure this myself.
I have a YUV image (12 bpp) that I load into textures and color-convert to RGB in my fragment shader. Everything works, but I would like to see where I can improve performance (in terms of frames per second).
Currently, I upload three textures for each image: one for the Y component (type GL_LUMINANCE), one for the U component (type GL_LUMINANCE and, of course, 1/4 the size of the Y texture), and one for the V component (likewise GL_LUMINANCE and 1/4 the size of the Y texture).
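For concreteness, here is a minimal sketch of that three-texture upload path, assuming a current EGL context and tightly packed `ByteBuffer` planes; `uploadPlane`, `yPlane`, `uPlane`, and `vPlane` are hypothetical names of my own, not anything standard:

```java
import java.nio.ByteBuffer;

import android.opengl.GLES20;

final class YuvTextures {

    /** Uploads one tightly packed 8-bit plane as a GL_LUMINANCE texture and returns its id. */
    static int uploadPlane(ByteBuffer plane, int width, int height) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        // The planes are tightly packed, so relax the default 4-byte row alignment.
        GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                width, height, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, plane);
        return tex[0];
    }
}

// Per frame, for a width x height image:
//   int yTex = YuvTextures.uploadPlane(yPlane, width, height);
//   int uTex = YuvTextures.uploadPlane(uPlane, width / 2, height / 2);
//   int vTex = YuvTextures.uploadPlane(vPlane, width / 2, height / 2);
```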
Assuming that I can get the YUV pixels in any layout (for example, U and V in separate planes or interleaved), would it be better to combine the three textures into only two, or only one? Obviously the same number of bytes must be pushed to the GPU either way, but with fewer textures there might be less overhead; at the very least it would use fewer texture units. My ideas:
- If the U and V pixels were interleaved with each other, I could load them into a single texture of type GL_LUMINANCE_ALPHA, which has two components (see the first sketch after this list).
- I could load the entire YUV image as a single texture (type GL_LUMINANCE, but 3/2 the height of the image), and then call texture2D() three times on that one texture in the fragment shader, doing a bit of arithmetic on the texture coordinates to address the Y, U, and V components (see the second sketch after this list).
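To make the first idea concrete: interleaved U/V is essentially the NV12/NV21 layout, and the pair can be uploaded as one W/2 x H/2 GL_LUMINANCE_ALPHA texture (two bytes per texel, analogous to `uploadPlane` above but with GL_LUMINANCE_ALPHA as the format). A GL_LUMINANCE_ALPHA sample comes back as (L, L, L, A), so U lands in `.r` and V in `.a`. A minimal fragment shader sketch, where the varying/uniform names and the full-range BT.601 constants are my assumptions:

```java
final class YuvShaders {
    // Fragment shader for idea 1: full-size Y texture plus half-size interleaved UV texture.
    static final String FS_Y_PLUS_UV =
              "precision mediump float;\n"
            + "varying vec2 vTexCoord;\n"
            + "uniform sampler2D yTex;   // GL_LUMINANCE, W x H\n"
            + "uniform sampler2D uvTex;  // GL_LUMINANCE_ALPHA, W/2 x H/2\n"
            + "void main() {\n"
            + "    float y = texture2D(yTex, vTexCoord).r;\n"
            + "    vec2 uv = texture2D(uvTex, vTexCoord).ra - 0.5; // U in .r, V in .a\n"
            + "    gl_FragColor = vec4(y + 1.402 * uv.y,\n"
            + "                        y - 0.344 * uv.x - 0.714 * uv.y,\n"
            + "                        y + 1.772 * uv.x,\n"
            + "                        1.0);\n"
            + "}\n";
}
```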
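And a sketch of the second idea, assuming standard I420 memory order (Y plane, then U, then V) uploaded as a single W x 3H/2 GL_LUMINANCE texture, with even width and height. The subtle part is that each W-wide texture row in the chroma region packs two W/2-wide chroma rows side by side, so the coordinate arithmetic has to unpack that, and sampling must hit texel centers (or use GL_NEAREST) to avoid filtering across plane boundaries. Again, the names and layout here are my assumptions:

```java
final class I420Shader {
    // Fragment shader for idea 2: the whole I420 buffer as one GL_LUMINANCE texture
    // of size W x (3H/2). Three texture2D() calls with computed coordinates.
    static final String FS_I420_SINGLE =
              "#ifdef GL_FRAGMENT_PRECISION_HIGH\n"
            + "precision highp float;  // texel addressing needs precision on large images\n"
            + "#else\n"
            + "precision mediump float;\n"
            + "#endif\n"
            + "varying vec2 vTexCoord;   // 0..1 over the W x H image\n"
            + "uniform sampler2D yuvTex;  // GL_LUMINANCE, W x (3H/2)\n"
            + "uniform vec2 imgSize;     // (W, H)\n"
            + "void main() {\n"
            + "    float texH = imgSize.y * 1.5;\n"
            + "    // Y occupies the top two thirds of the texture.\n"
            + "    float y = texture2D(yuvTex,\n"
            + "        vec2(vTexCoord.x, vTexCoord.y * imgSize.y / texH)).r;\n"
            + "    // Chroma pixel in the half-resolution plane.\n"
            + "    vec2 cPx = floor(floor(vTexCoord * imgSize) / 2.0);\n"
            + "    // Each W-wide texture row holds two W/2-wide chroma rows.\n"
            + "    float col = mod(cPx.y, 2.0) * (imgSize.x * 0.5) + cPx.x;\n"
            + "    float uRow = imgSize.y + floor(cPx.y / 2.0);  // U block starts at row H\n"
            + "    float vRow = uRow + imgSize.y * 0.25;         // V block starts H/4 rows later\n"
            + "    vec2 texel = 1.0 / vec2(imgSize.x, texH);\n"
            + "    float u = texture2D(yuvTex, (vec2(col, uRow) + 0.5) * texel).r - 0.5;\n"
            + "    float v = texture2D(yuvTex, (vec2(col, vRow) + 0.5) * texel).r - 0.5;\n"
            + "    gl_FragColor = vec4(y + 1.402 * v,\n"
            + "                        y - 0.344 * u - 0.714 * v,\n"
            + "                        y + 1.772 * u, 1.0);\n"
            + "}\n";
}
```

Whether the saved bind/upload overhead outweighs the extra per-fragment arithmetic is exactly the implementation-dependent part, so both would need to be benchmarked against the three-texture baseline.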