Unexpected pixel data format when reading from GraphicBuffer

I am working in the native Android platform, where I use a GraphicBuffer to allocate memory and then create an EGLImage from it. The EGLImage is then used as an OpenGL texture, which I render to with a simple full-screen quad.

The problem is that when I read the resulting pixel data back from the GraphicBuffer, I expect it to be laid out in memory as linear RGBA, but what I get is a texture containing three smaller, partially overlapping copies of the image side by side. That description may not say much, but the point is that the actual pixel data makes sense; the memory layout just seems to be something other than linear RGBA. I assume this is because the graphics driver stores pixels in an internal format other than linear RGBA.
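
For clarity, this is the addressing my read code was assuming, i.e. tightly packed rows at 4 bytes per pixel (a hypothetical helper, purely to illustrate the expected layout):

#include <cstddef>
#include <cstdint>

// Expected linear RGBA8888 layout: row y starts at byte y * width * 4,
// so pixel (x, y) sits at this offset.
inline size_t linearRgbaOffset(uint32_t x, uint32_t y, uint32_t width) {
    return (static_cast<size_t>(y) * width + x) * 4;
}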

If I render into a standard OpenGL texture and read it back with glReadPixels, everything works fine, so I assume the problem lies with my own memory allocation using GraphicBuffer.
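
The working reference path is essentially the following (a sketch, assuming an RGBA8888 color attachment; readBack is a hypothetical helper, and the FBO and dimensions correspond to targetFBO and outputFormat in the code below):

#include <GLES2/gl2.h>
#include <cstddef>
#include <cstdint>
#include <vector>

// Baseline that works: read the FBO contents back with glReadPixels.
// Rows come back tightly packed (the default GL_PACK_ALIGNMENT of 4 is fine for RGBA8888).
std::vector<uint8_t> readBack(GLuint fbo, GLsizei width, GLsizei height) {
    std::vector<uint8_t> pixels(static_cast<size_t>(width) * height * 4);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return pixels;
}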

If the cause is the driver’s internal memory layout, is there a way to force it to use linear RGBA? I have tried most of the usage flags the GraphicBuffer constructor accepts, without success. If not, is there a way to reorder the output in the shader so as to “cancel out” the memory layout?

I am building Android 4.4.3 for the Nexus 5.

//Allocate graphicbuffer
outputBuffer = new GraphicBuffer(outputFormat.width, outputFormat.height, outputFormat.bufferFormat,
                                 GraphicBuffer::USAGE_SW_READ_OFTEN |
                                 GraphicBuffer::USAGE_HW_RENDER |
                                 GraphicBuffer::USAGE_HW_TEXTURE);

/* ... */

//Create EGLImage from graphicbuffer
EGLint eglImageAttributes[] = {EGL_WIDTH, outputFormat.width,
                               EGL_HEIGHT, outputFormat.height,
                               EGL_MATCH_FORMAT_KHR, outputFormat.eglFormat,
                               EGL_IMAGE_PRESERVED_KHR, EGL_FALSE,
                               EGL_NONE};
EGLClientBuffer nativeBuffer = outputBuffer->getNativeBuffer();
eglImage = _eglCreateImageKHR(display, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, nativeBuffer, eglImageAttributes);

/* ... */

//Create output texture
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);

/* ... */

//Create target fbo
glGenFramebuffers(1, &targetFBO);
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

/* ... */

//Read from graphicbuffer
const Rect lockBoundsOutput(quadRenderer->outputFormat.width, quadRenderer->outputFormat.height);
status_t statusgb = quadRenderer->getOutputBuffer()->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &result);
1 answer

I managed to find the answer myself, and my assumption was wrong. The simple reason was that although I was rendering a 480×1080 texture, the allocated buffer was padded out to 640×1080, so I just needed to skip the padding at the end of each row and the output texture made sense.
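
For anyone hitting the same issue, here is a minimal sketch of a stride-aware readback (assuming an RGBA8888 buffer, i.e. 4 bytes per pixel; readPacked is a hypothetical helper, and buf corresponds to outputBuffer in the question):

#include <cstdint>
#include <cstring>
#include <vector>
#include <ui/GraphicBuffer.h>

// GraphicBuffer::getStride() reports the row stride in pixels (640 here),
// while getWidth() is the logical width (480); the difference is the padding
// to skip at the end of every row.
std::vector<uint8_t> readPacked(const android::sp<android::GraphicBuffer>& buf) {
    void* vaddr = nullptr;
    buf->lock(android::GraphicBuffer::USAGE_SW_READ_OFTEN, &vaddr);

    const uint32_t width  = buf->getWidth();   // 480
    const uint32_t height = buf->getHeight();  // 1080
    const uint32_t stride = buf->getStride();  // 640 (in pixels, not bytes)

    std::vector<uint8_t> packed(static_cast<size_t>(width) * height * 4);
    const uint8_t* src = static_cast<const uint8_t*>(vaddr);
    for (uint32_t y = 0; y < height; ++y) {
        // Only the first width * 4 bytes of each stride * 4 byte row are real pixels.
        std::memcpy(&packed[static_cast<size_t>(y) * width * 4], &src[y * stride * 4], width * 4);
    }

    buf->unlock();
    return packed;
}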


Source: https://habr.com/ru/post/1201156/

