Effective use of Core Image with AV Foundation

I am writing an iOS application that applies filters to existing video files and writes the results to new ones. I originally tried using Brad Larson's nice framework, GPUImage. Although I was able to output filtered files without much effort, the result was not perfect: the videos were the correct length, but some frames were missing and others were duplicated (see Issue 1501 for more information). I plan to learn more about OpenGL ES so that I can better investigate the dropped/duplicated frame problem. In the meantime, however, I am exploring other options for rendering my video files.

I am already familiar with Core Image, so I decided to use it in an alternative video-filtering solution. Inside the block passed to AVAssetWriterInput's requestMediaDataWhenReadyOnQueue:usingBlock:, I filter and write each frame of the input video file as follows:

CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
    CMTime presentationTimeStamp = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer);

    CVPixelBufferRef inputPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage* frame = [CIImage imageWithCVPixelBuffer:inputPixelBuffer];
    // a CIFilter created outside the "isReadyForMoreMediaData" loop
    [screenBlend setValue:frame forKey:kCIInputImageKey];

    CVPixelBufferRef outputPixelBuffer = NULL;
    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, assetWriterInputPixelBufferAdaptor.pixelBufferPool, &outputPixelBuffer);

    // verify that the pool vended a buffer in the expected format
    NSAssert(result == kCVReturnSuccess, @"CVPixelBufferPoolCreatePixelBuffer failed with error code %d", result);
    NSAssert(CVPixelBufferGetPixelFormatType(outputPixelBuffer) == kCVPixelFormatType_32BGRA, @"Wrong pixel format");

    [self.coreImageContext render:screenBlend.outputImage toCVPixelBuffer:outputPixelBuffer];
    BOOL success = [assetWriterInputPixelBufferAdaptor appendPixelBuffer:outputPixelBuffer withPresentationTime:presentationTimeStamp];
    CVPixelBufferRelease(outputPixelBuffer);
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
    completedOrFailed = !success;
}
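
For context, the snippet above assumes a CIContext backed by an EAGLContext and an AVAssetWriterInputPixelBufferAdaptor whose pool vends 32BGRA buffers. A minimal sketch of that setup follows; the option values, videoSize and assetWriterVideoInput are illustrative placeholders rather than code lifted from my project:

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>
#import <OpenGLES/EAGL.h>

// Core Image context backed by an OpenGL ES 2.0 context, so rendering can stay on the GPU
EAGLContext* eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.coreImageContext = [CIContext contextWithEAGLContext:eaglContext
                                                  options:@{ kCIContextWorkingColorSpace : [NSNull null] }];

// pixel buffer adaptor whose pool vends 32BGRA buffers matching the output dimensions
NSDictionary* pixelBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey           : @(videoSize.width),
    (id)kCVPixelBufferHeightKey          : @(videoSize.height)
};
assetWriterInputPixelBufferAdaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                                                     sourcePixelBufferAttributes:pixelBufferAttributes];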

This works well: rendering seems fast enough, and the resulting video file has no missing or duplicated frames. However, I'm not sure my code is as efficient as it could be. In particular, my questions are:

  • Does this approach allow the device to keep all of the frame data on the GPU, or are there any methods (such as imageWithCVPixelBuffer: or render:toCVPixelBuffer:) that prematurely copy pixels to the CPU?
  • Would it be more efficient to use CIContext drawImage:inRect:fromRect: to draw into an OpenGL ES context? (A rough sketch of what I mean follows this list.)
  • Related to #2: if I rendered with drawImage:inRect:fromRect:, how would I get the result back into a CVPixelBufferRef that I can append to the output file?
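
Regarding the second question, this is roughly the kind of drawing I have in mind (a sketch only; it assumes the CIContext was created with contextWithEAGLContext:, and eaglContext, offscreenFramebuffer and videoSize are hypothetical names for an already configured context, framebuffer and output size):

#import <OpenGLES/ES2/gl.h>

// drawImage:inRect:fromRect: renders into whatever framebuffer is currently
// bound in the EAGLContext behind the CIContext, not into a CVPixelBufferRef
[EAGLContext setCurrentContext:eaglContext];
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);

CIImage* filtered = screenBlend.outputImage;
[self.coreImageContext drawImage:filtered
                          inRect:CGRectMake(0, 0, videoSize.width, videoSize.height)
                        fromRect:filtered.extent];
glFlush();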

I realize that CIContext drawImage:inRect:fromRect: renders into the currently bound OpenGL ES framebuffer rather than into a CVPixelBufferRef. From what I can tell, GPUImageMovieWriter deals with this by a) using the texture cache to create an OpenGL ES texture that is backed by a pixel buffer from the adaptor's pool, and b) rendering into that texture, so the backing pixel buffer can then be appended to the writer.
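
If that reading is right, the Core Image equivalent would look something like the sketch below (illustrative only: it reuses the hypothetical eaglContext from above, assumes an offscreen framebuffer is already bound, and omits error handling):

#import <CoreVideo/CVOpenGLESTexture.h>
#import <CoreVideo/CVOpenGLESTextureCache.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// 1) texture cache tied to the EAGLContext (created once, not per frame)
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// 2) per frame: take a pixel buffer from the adaptor's pool and wrap it in a GL texture
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, assetWriterInputPixelBufferAdaptor.pixelBufferPool, &renderTarget);

CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL,
                                             GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)CVPixelBufferGetWidth(renderTarget),
                                             (GLsizei)CVPixelBufferGetHeight(renderTarget),
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// 3) attach the texture to the bound framebuffer, draw into it, then append the backing pixel buffer
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

CIImage* filtered = screenBlend.outputImage;
[self.coreImageContext drawImage:filtered inRect:filtered.extent fromRect:filtered.extent];
glFinish(); // make sure the GPU has finished writing before the buffer is appended

[assetWriterInputPixelBufferAdaptor appendPixelBuffer:renderTarget withPresentationTime:presentationTimeStamp];
CFRelease(renderTexture);
CVPixelBufferRelease(renderTarget);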


Source: https://habr.com/ru/post/1535286/

