Render YUV video in OpenGL from ffmpeg using CVPixelBufferRef and shaders

I am rendering YUV frames decoded with ffmpeg, using the iOS 5.0 function CVOpenGLESTextureCacheCreateTextureFromImage.

I am using Apple's GLCameraRipple sample as a reference.

My result on the iPhone screen: [screenshot of the rendered output]

I need to know what I'm doing wrong.

I am including the relevant parts of my code below to help find the error.

ffmpeg frame conversion setup:

    ctx->p_sws_ctx = sws_getContext(ctx->p_video_ctx->width,
                                    ctx->p_video_ctx->height,
                                    ctx->p_video_ctx->pix_fmt,
                                    ctx->p_video_ctx->width,
                                    ctx->p_video_ctx->height,
                                    PIX_FMT_YUV420P,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);

    // Frame buffer for the converted YUV420P data
    ctx->p_frame_buffer = malloc(avpicture_get_size(PIX_FMT_YUV420P,
                                                    ctx->p_video_ctx->width,
                                                    ctx->p_video_ctx->height));

    avpicture_fill((AVPicture *)ctx->p_picture_rgb, ctx->p_frame_buffer,
                   PIX_FMT_YUV420P,
                   ctx->p_video_ctx->width, ctx->p_video_ctx->height);
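A possible variant (an untested sketch against the same ctx fields): since the pixel buffer in the display code below is created as kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, swscale can be asked for NV12 output directly instead of planar YUV420P, so the converted frame already has the matching layout:

    // Sketch: request NV12 (biplanar Y + interleaved UV) from swscale,
    // matching the CVPixelBuffer format used in the display code.
    ctx->p_sws_ctx = sws_getContext(ctx->p_video_ctx->width,
                                    ctx->p_video_ctx->height,
                                    ctx->p_video_ctx->pix_fmt,
                                    ctx->p_video_ctx->width,
                                    ctx->p_video_ctx->height,
                                    PIX_FMT_NV12,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);

    ctx->p_frame_buffer = malloc(avpicture_get_size(PIX_FMT_NV12,
                                                    ctx->p_video_ctx->width,
                                                    ctx->p_video_ctx->height));

    avpicture_fill((AVPicture *)ctx->p_picture_rgb, ctx->p_frame_buffer,
                   PIX_FMT_NV12,
                   ctx->p_video_ctx->width, ctx->p_video_ctx->height);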

My visualization method:

    if (NULL == videoTextureCache) {
        NSLog(@"displayPixelBuffer error");
        return;
    }

    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, mTexW, mTexH,
                                 kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                 buffer, mFrameW * 3, NULL, 0, NULL, &pixelBuffer);

    CVReturn err;

    // Y-plane
    glActiveTexture(GL_TEXTURE0);
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       videoTextureCache,
                                                       pixelBuffer,
                                                       NULL,
                                                       GL_TEXTURE_2D,
                                                       GL_RED_EXT,
                                                       mTexW,
                                                       mTexH,
                                                       GL_RED_EXT,
                                                       GL_UNSIGNED_BYTE,
                                                       0,
                                                       &_lumaTexture);
    if (err) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // UV-plane
    glActiveTexture(GL_TEXTURE1);
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       videoTextureCache,
                                                       pixelBuffer,
                                                       NULL,
                                                       GL_TEXTURE_2D,
                                                       GL_RG_EXT,
                                                       mTexW / 2,
                                                       mTexH / 2,
                                                       GL_RG_EXT,
                                                       GL_UNSIGNED_BYTE,
                                                       1,
                                                       &_chromaTexture);
    if (err) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

    // Set the view port to the entire view
    glViewport(0, 0, backingWidth, backingHeight);

    static const GLfloat squareVertices[] = {
         1.0f,  1.0f,
        -1.0f,  1.0f,
         1.0f, -1.0f,
        -1.0f, -1.0f,
    };

    GLfloat textureVertices[] = {
        1, 1,
        1, 0,
        0, 1,
        0, 0,
    };

    // Draw the texture on the screen with OpenGL ES 2
    [self renderWithSquareVertices:squareVertices textureVertices:textureVertices];

    // Flush the CVOpenGLESTexture cache and release the texture
    CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
    CVPixelBufferRelease(pixelBuffer);

    [moviePlayerDelegate bufferDone];
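As an aside: CVPixelBufferCreateWithBytes describes the memory as a single plane with one bytesPerRow, which does not really fit a biplanar format, and buffers wrapped around existing memory may also not be IOSurface-backed, which the texture cache may require. An untested sketch of the planar variant, where yPlane and uvPlane are hypothetical pointers to the NV12 plane data:

    // Sketch: wrap existing NV12 plane data in a CVPixelBuffer,
    // describing each plane explicitly.
    void  *planeAddresses[2]   = { yPlane, uvPlane };
    size_t planeWidths[2]      = { mTexW, mTexW / 2 };
    size_t planeHeights[2]     = { mTexH, mTexH / 2 };
    size_t planeBytesPerRow[2] = { mTexW, mTexW };  // interleaved UV: one U + one V byte per 2x1 pixels

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                                      mTexW, mTexH,
                                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                                      NULL, 0,        // no contiguous block
                                                      2,              // number of planes
                                                      planeAddresses,
                                                      planeWidths,
                                                      planeHeights,
                                                      planeBytesPerRow,
                                                      NULL, NULL,     // no release callback
                                                      NULL,           // no attributes
                                                      &pixelBuffer);
    if (ret != kCVReturnSuccess) {
        NSLog(@"CVPixelBufferCreateWithPlanarBytes failed: %d", ret);
    }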

The renderWithSquareVertices:textureVertices: method:

    - (void)renderWithSquareVertices:(const GLfloat *)squareVertices textureVertices:(const GLfloat *)textureVertices
    {
        // Use shader program.
        glUseProgram(shader.program);

        // Update attribute values.
        glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
        glEnableVertexAttribArray(ATTRIB_VERTEX);
        glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
        glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);

        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        // Present
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER];
    }

My fragment shader:

    uniform sampler2D SamplerY;
    uniform sampler2D SamplerUV;

    varying highp vec2 _texcoord;

    void main()
    {
        mediump vec3 yuv;
        lowp vec3 rgb;

        yuv.x = texture2D(SamplerY, _texcoord).r;
        yuv.yz = texture2D(SamplerUV, _texcoord).rg - vec2(0.5, 0.5);

        // BT.601, the standard for SDTV, is provided as a reference:
        /*
        rgb = mat3(      1,       1,      1,
                         0, -.34413,  1.772,
                     1.402, -.71414,      0) * yuv;
        */

        // Using BT.709, which is the standard for HDTV
        rgb = mat3(      1,       1,      1,
                         0, -.18732, 1.8556,
                   1.57481, -.46813,      0) * yuv;

        gl_FragColor = vec4(rgb, 1);
    }

Thank you very much,

1 answer

I assume the problem is that YUV420 (or I420) is a three-plane image format: an 8-bit Y plane followed by 2x2-subsampled 8-bit U and V planes. The code from GLCameraRipple instead expects NV12: an 8-bit Y plane followed by a single interleaved U/V plane with 2x2 subsampling. Given that, I expect you will need three textures: luma_tex, u_chroma_tex, v_chroma_tex.
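A minimal sketch of that three-texture upload, using plain glTexImage2D rather than the texture cache; y_plane, u_plane, v_plane, w, and h are hypothetical names for the I420 plane pointers and frame size:

    // Upload the three I420 planes as single-channel textures:
    // Y at full resolution, U and V at half resolution in each axis.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // plane rows are tightly packed

    GLuint planeTex[3];
    glGenTextures(3, planeTex);

    const GLubyte *planes[3]  = { y_plane, u_plane, v_plane };
    const GLsizei  widths[3]  = { w, w / 2, w / 2 };
    const GLsizei  heights[3] = { h, h / 2, h / 2 };

    for (int i = 0; i < 3; i++) {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, planeTex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
                     widths[i], heights[i], 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, planes[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }

The fragment shader would then declare three samplers (e.g. SamplerY, SamplerU, SamplerV) and read the .r channel of each, instead of taking .rg from a combined UV texture.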

Also note that GLCameraRipple may expect "video range" data. In other words, the values for the planar format are luma = [16, 235] and chroma = [16, 240].
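If that is the case, a video-range expansion step in the fragment shader would look roughly like this (a sketch against the shader above; the 255/219 and 255/224 factors map the nominal luma and chroma ranges back to [0, 1] before the matrix multiply):

    yuv.x  = (texture2D(SamplerY,  _texcoord).r  - 16.0 / 255.0)        * (255.0 / 219.0);
    yuv.yz = (texture2D(SamplerUV, _texcoord).rg - vec2(128.0 / 255.0)) * (255.0 / 224.0);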

