FFMPEG decoding artifacts between keyframes

I am currently experiencing artifacts when decoding video with the FFmpeg API. On what I assume are the intermediate frames, artifacts slowly build up, and only around the areas of active movement in the image. These artifacts accumulate for 50-100 frames until, I assume, a keyframe resets them; the frames then decode correctly and the artifacts start building up again.

One thing that bothers me is that I only have a few example videos that run at 30 frames per second (h264), but all of my 60 fps videos (h264) exhibit the problem.

I currently do not have enough reputation to post the image, so hopefully this link will work: http://i.imgur.com/PPXXkJc.jpg

    int numBytes;
    int frameFinished;
    AVFrame* decodedRawFrame;
    AVFrame* rgbFrame;

    // Enum class for decoding results, used to break decode loop when a frame is gathered
    DecodeResult retResult = DecodeResult::Fail;

    decodedRawFrame = av_frame_alloc();
    rgbFrame = av_frame_alloc();

    if (!decodedRawFrame)
    {
        fprintf(stderr, "Could not allocate video frame\n");
        return DecodeResult::Fail;
    }

    numBytes = avpicture_get_size(PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);
    uint8_t* buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

    avpicture_fill((AVPicture *)rgbFrame, buffer, PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);

    AVPacket packet;

    while (av_read_frame(mFormatCtx, &packet) >= 0 && retResult != DecodeResult::Success)
    {
        // Is this a packet from the video stream?
        if (packet.stream_index == mVideoStreamIndex)
        {
            // Decode video frame
            int decodeValue = avcodec_decode_video2(mCodecCtx, decodedRawFrame, &frameFinished, &packet);

            // Did we get a video frame?
            if (frameFinished) // && rgbFrame->pict_type != AV_PICTURE_TYPE_NONE )
            {
                // Convert the image from its native format to RGB
                int SwsFlags = SWS_BILINEAR;
                // Accurate rounding clears up a problem where the start
                // of videos have green bars on them
                SwsFlags |= SWS_ACCURATE_RND;

                struct SwsContext* ctx = sws_getCachedContext(NULL,
                    mCodecCtx->width, mCodecCtx->height, mCodecCtx->pix_fmt,
                    mCodecCtx->width, mCodecCtx->height, PIX_FMT_RGBA,
                    SwsFlags, NULL, NULL, NULL);

                sws_scale(ctx, decodedRawFrame->data, decodedRawFrame->linesize, 0, mCodecCtx->height,
                          rgbFrame->data, rgbFrame->linesize);

                //if (count % 5 == 0 && count < 105)
                //    DebugSavePPMImage(rgbFrame, mCodecCtx->width, mCodecCtx->height, count);

                ++count;

                // ViewableFrame is a struct to hold buffer and frame together in a queue
                ViewableFrame frame;
                frame.buffer = buffer;
                frame.frame = rgbFrame;
                mFrameQueue.push(frame);

                retResult = DecodeResult::Success;

                sws_freeContext(ctx);
            }
        }

        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    // Check for end of file leftover frames
    if (retResult != DecodeResult::Success)
    {
        int result = av_read_frame(mFormatCtx, &packet);
        if (result < 0)
            isEoF = true;
        av_free_packet(&packet);
    }

    // Free the YUV frame
    av_frame_free(&decodedRawFrame);

I am trying to build a queue of decoded frames, which I then use and free as needed. Could my separating the frames like this be causing the incorrect decoding of the intermediate frames? I also break out of the decoding loop as soon as a frame has been successfully assembled (DecodeResult::Success); most of the examples I have seen tend to loop over the whole video instead.
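For clarity, the queue element is roughly the following (reconstructed here for illustration; only the field names buffer and frame and the queue name come from the code above):

    // Reconstruction for illustration only -- the real struct lives elsewhere in my code.
    struct ViewableFrame
    {
        uint8_t* buffer; // av_malloc'd RGBA pixel data backing the frame below
        AVFrame* frame;  // filled via avpicture_fill(), so its data[] pointers alias buffer
    };

    std::queue<ViewableFrame> mFrameQueue; // requires <queue>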

The codec context, the video stream information, and the format context are all set up exactly as shown in the main function of https://github.com/chelyaev/ffmpeg-tutorial/blob/master/tutorial01.c
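For anyone who does not want to click through, that setup is roughly the following (old-style API, matching the avcodec_decode_video2 call above; error handling trimmed, and `filename` stands in for wherever the path actually comes from):

    av_register_all();

    // Open the file and read the stream information
    if (avformat_open_input(&mFormatCtx, filename, NULL, NULL) != 0)
        return DecodeResult::Fail;
    if (avformat_find_stream_info(mFormatCtx, NULL) < 0)
        return DecodeResult::Fail;

    // Find the first video stream
    mVideoStreamIndex = -1;
    for (unsigned int i = 0; i < mFormatCtx->nb_streams; ++i)
    {
        if (mFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            mVideoStreamIndex = i;
            break;
        }
    }
    if (mVideoStreamIndex == -1)
        return DecodeResult::Fail;

    // Open the decoder for that stream
    mCodecCtx = mFormatCtx->streams[mVideoStreamIndex]->codec;
    AVCodec* codec = avcodec_find_decoder(mCodecCtx->codec_id);
    if (codec == NULL || avcodec_open2(mCodecCtx, codec, NULL) < 0)
        return DecodeResult::Fail;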

Any suggestions are welcome.

1 answer

For reference, in case anyone else ends up in a similar position: there appears to be an issue with some older versions of FFmpeg when sws_scale is used to convert the image's pixel format without actually changing the size of the final frame. If you instead build the flags for the SwsContext like this:

    int SwsFlags = SWS_BILINEAR;   // or whatever scaling algorithm you want
    SwsFlags |= SWS_ACCURATE_RND;  // under the hood, forces ffmpeg to use the same logic as if it were scaling

SWS_ACCURATE_RND carries a performance penalty, but for ordinary video it is probably not noticeable. This removes the green flashes, or the green bars at the edges of the texture, if they were present.
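For context, those flags are then passed straight through to the (cached) scaler context, exactly as in the question's code:

    struct SwsContext* ctx = sws_getCachedContext(NULL,
        mCodecCtx->width, mCodecCtx->height, mCodecCtx->pix_fmt,  // source: decoder output
        mCodecCtx->width, mCodecCtx->height, PIX_FMT_RGBA,        // destination: same size, RGBA
        SwsFlags, NULL, NULL, NULL);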

I also wanted to thank Multimedia Mike and George Y: they were right that the way I was decoding the frames was not preserving the packets correctly, and that is what caused the video artifacts to build up from previous frames.
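I can't reproduce the exact change here, but as a general sketch of what keeping the frames intact means with this API: every entry pushed into the queue needs its own destination AVFrame and its own pixel buffer, allocated per decoded frame rather than shared. The variable names below are just for illustration:

    // Sketch only: give each queued frame its own AVFrame and pixel buffer so that
    // nothing already sitting in the queue can be overwritten by a later sws_scale() call.
    AVFrame* queuedFrame = av_frame_alloc();
    int bytes = avpicture_get_size(PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);
    uint8_t* queuedBuffer = (uint8_t*)av_malloc(bytes * sizeof(uint8_t));
    avpicture_fill((AVPicture*)queuedFrame, queuedBuffer, PIX_FMT_RGBA,
                   mCodecCtx->width, mCodecCtx->height);

    sws_scale(ctx, decodedRawFrame->data, decodedRawFrame->linesize, 0, mCodecCtx->height,
              queuedFrame->data, queuedFrame->linesize);

    ViewableFrame entry;
    entry.buffer = queuedBuffer; // released with av_free() once the frame has been consumed
    entry.frame = queuedFrame;   // released with av_frame_free() at the same time
    mFrameQueue.push(entry);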

