After reading some examples, reading the source code, and getting some help from other people, I managed to get this working. I combined code from the transcoding and encoding examples. Here is my code.
Here are the main points:

1. libswscale should be used to convert the AVFrame to the packed pixel format that an OpenCV Mat expects (BGR24). For this we define:
```c
struct SwsContext *sws_ctx = NULL;
sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                         pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_BGR24,
                         SWS_BICUBIC, NULL, NULL, NULL);
```
2. To convert the OpenCV Mat back to an AVFrame, use swscale again, this time converting from OpenCV's BGR frame format to YUV. So I do this:
```c
struct SwsContext *sws_ctx_bgr_yuv = NULL;
sws_ctx_bgr_yuv = sws_getContext(pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_BGR24,
                                 pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                                 SWS_BICUBIC, NULL, NULL, NULL);
```
And here is the read/decode/encode loop for each frame:
```c
while (1) {
    if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
        break;
    stream_index = packet.stream_index;
    type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
    av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
           stream_index);

    if (filter_ctx[stream_index].filter_graph) {
        av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
        frame = av_frame_alloc();
        if (!frame) {
            ret = AVERROR(ENOMEM);
            break;
        }
        av_packet_rescale_ts(&packet,
                             ifmt_ctx->streams[stream_index]->time_base,
                             ifmt_ctx->streams[stream_index]->codec->time_base);
        dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
                                                  avcodec_decode_audio4;
        ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
                       &got_frame, &packet);
        if (ret < 0) {
            av_frame_free(&frame);
            av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
            break;
        }

        if (got_frame) {
            if (stream_index == video_index) {
                sws_scale(sws_ctx, (uint8_t const * const *)frame->data,
                          frame->linesize, 0, pCodecCtx->height,
                          pFrameRGB->data, pFrameRGB->linesize);
                cv::Mat img(frame->height, frame->width, CV_8UC3,
                            pFrameRGB->data[0]);
                img = manipulate_image(img);
```