I am currently experiencing artifacts when decoding video using the FFmpeg API. On what I assume are intermediate (inter-predicted) frames, artifacts slowly accumulate, and only around areas of active movement in the frame. The artifacts build up for 50-100 frames until, I assume, a keyframe resets them; the frames then decode correctly, and the artifacts start building again.
One thing that bothers me: I have a few example videos that run at 30 frames per second (h264) and decode fine, yet all of my 60 fps videos (h264) exhibit the problem.
I currently do not have enough reputation to post the image, so hopefully this link will work. http://i.imgur.com/PPXXkJc.jpg
int numBytes;
int frameFinished;
AVFrame* decodedRawFrame;
AVFrame* rgbFrame;

// Enum class for decoding results, used to break the decode loop once a frame is gathered
DecodeResult retResult = DecodeResult::Fail;

decodedRawFrame = av_frame_alloc();
rgbFrame = av_frame_alloc();
if (!decodedRawFrame)
{
    fprintf(stderr, "Could not allocate video frame\n");
    return DecodeResult::Fail;
}

numBytes = avpicture_get_size(PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);
uint8_t* buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

avpicture_fill((AVPicture *) rgbFrame, buffer, PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);

AVPacket packet;

while (av_read_frame(mFormatCtx, &packet) >= 0 && retResult != DecodeResult::Success)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == mVideoStreamIndex)
    {
        // Decode video frame
        int decodeValue = avcodec_decode_video2(mCodecCtx, decodedRawFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished) // && rgbFrame->pict_type != AV_PICTURE_TYPE_NONE )
        {
            // Convert the image from its native format to RGB
            int SwsFlags = SWS_BILINEAR;
            // Accurate rounding clears up a problem where the start
            // of videos have green bars on them
            SwsFlags |= SWS_ACCURATE_RND;
            struct SwsContext *ctx = sws_getCachedContext(NULL,
                mCodecCtx->width, mCodecCtx->height, mCodecCtx->pix_fmt,
                mCodecCtx->width, mCodecCtx->height, PIX_FMT_RGBA,
                SwsFlags, NULL, NULL, NULL);

            sws_scale(ctx, decodedRawFrame->data, decodedRawFrame->linesize, 0,
                mCodecCtx->height, rgbFrame->data, rgbFrame->linesize);

            //if (count % 5 == 0 && count < 105)
            //    DebugSavePPMImage(rgbFrame, mCodecCtx->width, mCodecCtx->height, count);

            ++count;

            // ViewableFrame is a struct to hold buffer and frame together in a queue
            ViewableFrame frame;
            frame.buffer = buffer;
            frame.frame = rgbFrame;
            mFrameQueue.push(frame);

            retResult = DecodeResult::Success;

            sws_freeContext(ctx);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}

// Check for end-of-file leftover frames
if (retResult != DecodeResult::Success)
{
    int result = av_read_frame(mFormatCtx, &packet);
    if (result < 0)
        isEoF = true;
    av_free_packet(&packet);
}

// Free the YUV frame
av_frame_free(&decodedRawFrame);
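For reference, ViewableFrame is essentially just this pair (field types inferred from the usage above):

struct ViewableFrame
{
    uint8_t* buffer; // RGBA buffer allocated with av_malloc
    AVFrame* frame;  // frame whose data pointers were filled from buffer
};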
I am trying to create a queue of decoded frames, which I then use and free as needed. Is my separation of the frame from the decoder causing the intermediate frames to decode incorrectly? I also break out of the decoding loop as soon as a frame is successfully assembled (DecodeResult::Success); most of the examples I have seen tend to iterate over the whole video instead.
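In case it is relevant, the queued frames are used and freed roughly like this (a simplified sketch; UploadTexture stands in for my actual rendering call):

// Simplified sketch of how queued frames are consumed and released.
// UploadTexture is a placeholder for the real rendering code.
void ConsumeNextFrame()
{
    if (mFrameQueue.empty())
        return;

    ViewableFrame viewable = mFrameQueue.front();
    mFrameQueue.pop();

    // Hand the RGBA pixels off to the renderer.
    UploadTexture(viewable.frame->data[0], viewable.frame->linesize[0]);

    // Release the pixel buffer and the frame wrapper once done.
    av_free(viewable.buffer);
    av_frame_free(&viewable.frame);
}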
The codec context, video stream information, and format context are all set up exactly as shown in the main() function of https://github.com/chelyaev/ffmpeg-tutorial/blob/master/tutorial01.c
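Condensed, that setup amounts to roughly the following (this mirrors the tutorial rather than being copied verbatim; error handling is omitted and the member names are adapted to my class):

av_register_all();

// Open the file and read the stream information.
avformat_open_input(&mFormatCtx, filename, NULL, NULL);
avformat_find_stream_info(mFormatCtx, NULL);

// Find the first video stream.
mVideoStreamIndex = -1;
for (unsigned int i = 0; i < mFormatCtx->nb_streams; i++)
{
    if (mFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
    {
        mVideoStreamIndex = i;
        break;
    }
}

// Grab the codec context for that stream and open a matching decoder.
mCodecCtx = mFormatCtx->streams[mVideoStreamIndex]->codec;
AVCodec* codec = avcodec_find_decoder(mCodecCtx->codec_id);
avcodec_open2(mCodecCtx, codec, NULL);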
Any suggestions are welcome.