Efficiently convert AVFrame to QImage

I need to extract frames from a video in my Qt-based application. Using the ffmpeg libraries, I can get the frames as AVFrames, which I then need to convert to QImage for use in other parts of my application. This conversion needs to be efficient. So far it seems that sws_scale() is the right function to use, but I'm not sure which source and destination pixel formats to specify.

+4
5 answers

I know this is late, but maybe someone will find it useful. Starting from here, I got the idea for doing the same conversion, and it comes out a little shorter.

So, I create a QImage that is reused for every decoded frame:

 QImage img( width, height, QImage::Format_RGB888 ); 

Then frameRGB is allocated:

 frameRGB = av_frame_alloc();
 // Allocate memory for the pixels of a picture and set up the AVPicture fields for it.
 avpicture_alloc( (AVPicture *)frameRGB, AV_PIX_FMT_RGB24, width, height );

After the first frame is decoded, I create the SwsContext conversion context like this (it will be reused for all subsequent frames):

 imgConvertCtx = sws_getContext( codecContext->width, codecContext->height, codecContext->pix_fmt,
                                 width, height, AV_PIX_FMT_RGB24,
                                 SWS_BICUBIC, NULL, NULL, NULL );

And finally, the conversion is performed for every decoded frame:

 if( 1 == framesFinished && nullptr != imgConvertCtx ) {
     // convert frame to frameRGB
     sws_scale( imgConvertCtx, frame->data, frame->linesize, 0, codecContext->height,
                frameRGB->data, frameRGB->linesize );
     // copy frameRGB into the QImage, one scanline at a time
     for( int y = 0; y < height; ++y )
         memcpy( img.scanLine(y), frameRGB->data[0] + y * frameRGB->linesize[0], width * 3 );
 }

See the link for more details.
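For completeness, a minimal sketch of the matching cleanup once decoding is finished, assuming the same frameRGB and imgConvertCtx variables shown above:

 // free the RGB picture buffer, the frame itself, and the scaler context
 avpicture_free( (AVPicture *)frameRGB );
 av_frame_free( &frameRGB );
 sws_freeContext( imgConvertCtx );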

+3

I came up with the following two-step process: first convert the decoded AVFrame to another AVFrame in the RGB color space, and then convert that to a QImage. It works fast enough.

 src_frame = get_decoded_frame();

 AVFrame *pFrameRGB = avcodec_alloc_frame(); // intermediate pframe
 if( pFrameRGB == NULL ) {
     ; // Handle error
 }

 int numBytes = avpicture_get_size( PIX_FMT_RGB24,
                                    is->video_st->codec->width,
                                    is->video_st->codec->height );
 uint8_t *buffer = (uint8_t*)malloc( numBytes );
 avpicture_fill( (AVPicture*)pFrameRGB, buffer, PIX_FMT_RGB24,
                 is->video_st->codec->width, is->video_st->codec->height );

 int dst_fmt = PIX_FMT_RGB24;
 int dst_w   = is->video_st->codec->width;
 int dst_h   = is->video_st->codec->height;

 // TODO: cache the following conversion context for speedup,
 // and recalculate it only when the dimensions change
 SwsContext *img_convert_ctx_temp;
 img_convert_ctx_temp = sws_getContext( is->video_st->codec->width,
                                        is->video_st->codec->height,
                                        is->video_st->codec->pix_fmt,
                                        dst_w, dst_h, (PixelFormat)dst_fmt,
                                        SWS_BICUBIC, NULL, NULL, NULL );

 QImage *myImage = new QImage( dst_w, dst_h, QImage::Format_RGB32 );

 sws_scale( img_convert_ctx_temp, src_frame->data, src_frame->linesize, 0,
            is->video_st->codec->height, pFrameRGB->data, pFrameRGB->linesize );

 uint8_t *src = (uint8_t *)(pFrameRGB->data[0]);
 for( int y = 0; y < dst_h; y++ ) {
     QRgb *scanLine = (QRgb *)myImage->scanLine(y);
     for( int x = 0; x < dst_w; x++ ) {
         scanLine[x] = qRgb( src[3*x], src[3*x+1], src[3*x+2] );
     }
     src += pFrameRGB->linesize[0];
 }
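A hedged note on the TODO above: libswscale provides sws_getCachedContext(), which reuses an existing context and only recreates it when the dimensions or pixel formats change. A minimal sketch, assuming the same is->video_st->codec fields and dst_w/dst_h from the snippet above (the static local is my own simplification):

 // Reuse one scaler context across frames; sws_getCachedContext() rebuilds it
 // only if the source/destination dimensions or pixel formats have changed.
 static SwsContext *img_convert_ctx = NULL;
 img_convert_ctx = sws_getCachedContext( img_convert_ctx,
                                         is->video_st->codec->width,
                                         is->video_st->codec->height,
                                         is->video_st->codec->pix_fmt,
                                         dst_w, dst_h, PIX_FMT_RGB24,
                                         SWS_BICUBIC, NULL, NULL, NULL );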

If you find a more efficient approach, let me know in the comments.

+5

A simpler approach, I think:

 void takeSnapshot( AVCodecContext* dec_ctx, AVFrame* frame )
 {
     SwsContext* img_convert_ctx;
     img_convert_ctx = sws_getContext( dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                                       dec_ctx->width, dec_ctx->height, AV_PIX_FMT_RGB24,
                                       SWS_BICUBIC, NULL, NULL, NULL );

     AVFrame* frameRGB = av_frame_alloc();
     avpicture_alloc( (AVPicture*)frameRGB, AV_PIX_FMT_RGB24, dec_ctx->width, dec_ctx->height );

     sws_scale( img_convert_ctx, frame->data, frame->linesize, 0, dec_ctx->height,
                frameRGB->data, frameRGB->linesize );

     // QImage wraps frameRGB's buffer without copying; save() writes it out
     QImage image( frameRGB->data[0], dec_ctx->width, dec_ctx->height,
                   frameRGB->linesize[0], QImage::Format_RGB888 );
     image.save( "capture.png" );

     // release the temporary picture buffer, frame, and scaler context
     avpicture_free( (AVPicture*)frameRGB );
     av_frame_free( &frameRGB );
     sws_freeContext( img_convert_ctx );
 }
+1

I just found that scanLine simply writes into a buffer. All you need is AV_PIX_FMT_RGB32 for the AVFrame and QImage::Format_RGB32 for the QImage.

Then, after decoding, a single memcpy is enough (assuming the QImage's bytesPerLine() matches pFrameRGB->linesize[0]):

memcpy(img.scanLine(0), pFrameRGB->data[0], pFrameRGB->linesize[0] * pFrameRGB->height);
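For context, a minimal sketch of how the surrounding setup could look under that assumption. The codecContext and frame names are placeholders for your decoder context and decoded frame; note that on little-endian machines AV_PIX_FMT_RGB32 is stored as BGRA bytes, which is exactly the memory layout QImage::Format_RGB32 expects.

 // Sketch: scale the decoded frame into an RGB32 AVFrame, then copy it into
 // the QImage with a single memcpy when the row strides match.
 QImage img( codecContext->width, codecContext->height, QImage::Format_RGB32 );

 AVFrame *pFrameRGB = av_frame_alloc();
 avpicture_alloc( (AVPicture*)pFrameRGB, AV_PIX_FMT_RGB32, codecContext->width, codecContext->height );
 pFrameRGB->width  = codecContext->width;   // avpicture_alloc does not set these
 pFrameRGB->height = codecContext->height;

 SwsContext *ctx = sws_getContext( codecContext->width, codecContext->height, codecContext->pix_fmt,
                                   codecContext->width, codecContext->height, AV_PIX_FMT_RGB32,
                                   SWS_BICUBIC, NULL, NULL, NULL );

 sws_scale( ctx, frame->data, frame->linesize, 0, codecContext->height,
            pFrameRGB->data, pFrameRGB->linesize );

 // The single memcpy is only valid when both buffers use the same bytes per line;
 // otherwise fall back to a per-scanline copy.
 if( img.bytesPerLine() == pFrameRGB->linesize[0] )
     memcpy( img.scanLine(0), pFrameRGB->data[0], pFrameRGB->linesize[0] * pFrameRGB->height );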

0

Today I tried passing image->bits() directly to sws_scale, and it works, so no extra copy into memory is needed. For instance:

 /* 1. Get frame and QImage to show */
 struct my_frame *frame = get_frame( source );
 QImage *myImage = new QImage( dst_w, dst_h, QImage::Format_RGBA8888 );

 /* 2. Convert and write into the image buffer */
 uint8_t *dst[] = { myImage->bits() };
 int linesizes[4];
 av_image_fill_linesizes( linesizes, AV_PIX_FMT_RGBA, frame->width );
 sws_scale( myswscontext, frame->data, (const int*)frame->linesize,
            0, frame->height, dst, linesizes );
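The myswscontext used above is not shown; a minimal sketch of how it could be created, where src_w, src_h, and src_pix_fmt are hypothetical placeholders for the decoded frame's dimensions and pixel format:

 // Create the scaler once and reuse it; the RGBA destination format matches
 // the byte layout of QImage::Format_RGBA8888.
 SwsContext *myswscontext = sws_getContext( src_w, src_h, src_pix_fmt,
                                            dst_w, dst_h, AV_PIX_FMT_RGBA,
                                            SWS_BICUBIC, NULL, NULL, NULL );

If dst_w differs from the source width, the destination linesizes should be filled from dst_w (or simply taken from myImage->bytesPerLine()) rather than from frame->width.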
0

Source: https://habr.com/ru/post/1442322/

