CVPixelBufferRef Optimization

I am working on a project in which I create a video from UIImages, using code I found here, and I have been trying for several days now to optimize it (for about 300 images, it takes about 5 minutes in the Simulator, and on the device it simply crashes from running out of memory).

I will start with the working code I have today (I am using ARC):

    -(void) writeImageAsMovie:(NSArray *)array toPath:(NSString *)path size:(CGSize)size duration:(int)duration
    {
        NSError *error = nil;
        AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                               fileType:AVFileTypeQuickTimeMovie
                                                                  error:&error];
        NSParameterAssert(videoWriter);

        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                       [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                       nil];
        AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                             outputSettings:videoSettings];
        AVAssetWriterInputPixelBufferAdaptor *adaptor =
            [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                             sourcePixelBufferAttributes:nil];
        NSParameterAssert(writerInput);
        NSParameterAssert([videoWriter canAddInput:writerInput]);
        [videoWriter addInput:writerInput];

        // Start a session:
        [videoWriter startWriting];
        [videoWriter startSessionAtSourceTime:kCMTimeZero];

        CVPixelBufferRef buffer = NULL;
        buffer = [self newPixelBufferFromCGImage:[[self.frames objectAtIndex:0] CGImage]];
        CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
        [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];

        dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
        int frameNumber = [self.frames count];

        [writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
            NSLog(@"Entering block with frames: %i", [self.frames count]);
            if (!self.frames || [self.frames count] == 0) {
                return;
            }
            int i = 1;
            while (1) {
                if (i == frameNumber) {
                    break;
                }
                if ([writerInput isReadyForMoreMediaData]) {
                    freeMemory();
                    NSLog(@"inside for loop %d (%i)", i, [self.frames count]);
                    UIImage *image = [self.frames objectAtIndex:i];
                    CGImageRef imageRef = [image CGImage];
                    CVPixelBufferRef sampleBuffer = [self newPixelBufferFromCGImage:imageRef];
                    CMTime frameTime = CMTimeMake(1, TIME_STEP);
                    CMTime lastTime = CMTimeMake(i, TIME_STEP);
                    CMTime presentTime = CMTimeAdd(lastTime, frameTime);
                    if (sampleBuffer) {
                        [adaptor appendPixelBuffer:sampleBuffer withPresentationTime:presentTime];
                        i++;
                        CVPixelBufferRelease(sampleBuffer);
                    } else {
                        break;
                    }
                }
            }
            [writerInput markAsFinished];
            [videoWriter finishWriting];

            self.frames = nil;
            CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
        }];
    }
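For context, I call it roughly like this (the path and size below are placeholder values; note that, as written, the method actually reads the frames from self.frames rather than from the array parameter, and TIME_STEP is my frames-per-second constant):

    // Placeholder invocation: writes self.frames out as a QuickTime movie.
    NSString *moviePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.mov"];
    [self writeImageAsMovie:self.frames
                     toPath:moviePath
                       size:CGSizeMake(320, 480)
                   duration:10];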

And now the function that produces the pixel buffers, which is the part I am struggling with:

    - (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image
    {
        CVPixelBufferRef pxbuffer = NULL;
        int width = CGImageGetWidth(image) * 2;
        int height = CGImageGetHeight(image) * 2;

        NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                           [NSNumber numberWithInt:kCVPixelFormatType_32ARGB],
                                               (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey,
                                           [NSNumber numberWithInt:width],
                                               (__bridge NSString *)kCVPixelBufferWidthKey,
                                           [NSNumber numberWithInt:height],
                                               (__bridge NSString *)kCVPixelBufferHeightKey,
                                           nil];

        CVPixelBufferPoolRef pixelBufferPool;
        CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                                                    (__bridge CFDictionaryRef)attributes,
                                                    &pixelBufferPool);
        NSParameterAssert(theError == kCVReturnSuccess);

        CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, width * 4,
                                                     rgbColorSpace, kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);

        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }

The first strange thing: as you can see in this function, I have to multiply the width and height by 2, otherwise the resulting video is all messed up, and I can't understand why (I can post screenshots if that helps; the pixels seem to come from my image, but the width is wrong, and there is a big black square at the bottom of the video).

Another problem is that it uses a very large amount of memory; I suspect the pixel buffers are not being freed properly, but I don't understand why.
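If the culprit is that newPixelBufferFromCGImage: creates a new CVPixelBufferPool on every call and never releases it, a sketch of the alternative I have in mind would look like this (untested; it assumes the adaptor is created with non-nil sourcePixelBufferAttributes, since with nil attributes adaptor.pixelBufferPool may be NULL):

    // Sketch of a pool-based replacement for newPixelBufferFromCGImage:.
    // The pool is the adaptor's own (adaptor.pixelBufferPool), created once,
    // instead of a fresh CVPixelBufferPool per frame that is never released.
    - (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image
                                             pool:(CVPixelBufferPoolRef)pool
    {
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pxbuffer);
        if (status != kCVReturnSuccess || pxbuffer == NULL) {
            return NULL;
        }

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer),
                                                     CVPixelBufferGetWidth(pxbuffer),
                                                     CVPixelBufferGetHeight(pxbuffer),
                                                     8,
                                                     CVPixelBufferGetBytesPerRow(pxbuffer), // may be padded
                                                     rgbColorSpace,
                                                     kCGImageAlphaNoneSkipFirst);
        CGContextDrawImage(context,
                           CGRectMake(0, 0,
                                      CVPixelBufferGetWidth(pxbuffer),
                                      CVPixelBufferGetHeight(pxbuffer)),
                           image);
        CGContextRelease(context);
        CGColorSpaceRelease(rgbColorSpace);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer; // caller must CVPixelBufferRelease() after appending
    }

Each iteration of the writing loop would also be wrapped in @autoreleasepool { ... } so that the UIImage/CGImage temporaries are drained per frame instead of accumulating until the block returns.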

Finally, it is very slow. I have two ideas for improving it, but I haven't managed to make either one work:

  • First, I should avoid going through UIImage to create my pixel buffers, since I create the UIImage myself from raw data (uint8_t *). I tried using CVPixelBufferCreateWithBytes, but it didn't work; here is how I tried (a fuller sketch follows after this list):

    OSType pixFmt = CVPixelBufferGetPixelFormatType(pxbuffer);
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height, pixFmt,
                                 self.composition.srcImage.resultImageData,
                                 width * 2,
                                 NULL, NULL,
                                 (__bridge CFDictionaryRef)attributes,
                                 &pxbuffer);

(The arguments are the same as in the function above; my image data is encoded at 16 bits per pixel, and I could not find a suitable OSType value to pass to the function.) If someone knows how to use it (or whether it is even possible with 16-bit-per-pixel data), it would help me avoid a really pointless conversion.

  • Second, I would like to avoid kCVPixelFormatType_32ARGB for my video. I assume it would be faster to use something with fewer bits per pixel, but when I try (I tried all the kCVPixelFormatType_16XXXXX formats, with the context created using 5 bits per component and kCGImageAlphaNoneSkipFirst), either it crashes, or the resulting video contains nothing (with kCVPixelFormatType_16BE555).
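For reference, here is the fuller sketch of the direct-from-bytes attempt (untested, and the helper name is hypothetical; I am assuming here that my raw data is little-endian 5-5-5 RGB and tightly packed, which may be wrong — that is exactly the OSType question):

    // Hypothetical sketch: wrap existing 16-bits-per-pixel data in a
    // CVPixelBuffer without round-tripping through UIImage/CoreGraphics.
    // If the real layout differs, the OSType must be changed to match it.
    static void ReleasePixelData(void *releaseRefCon, const void *baseAddress)
    {
        // Invoked when the buffer is destroyed. Free baseAddress here if the
        // buffer owns the bytes; leave empty if they are owned elsewhere.
    }

    - (CVPixelBufferRef)newPixelBufferFromRawData:(uint8_t *)data   // hypothetical helper
                                            width:(size_t)width
                                           height:(size_t)height
    {
        CVPixelBufferRef pxbuffer = NULL;
        size_t bytesPerRow = width * 2;                      // 16 bits per pixel
        CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                       width, height,
                                                       kCVPixelFormatType_16LE555, // LE 5-5-5 RGB
                                                       data,
                                                       bytesPerRow,
                                                       ReleasePixelData, NULL,
                                                       NULL,  // no extra attributes
                                                       &pxbuffer);
        if (status != kCVReturnSuccess) {
            NSLog(@"CVPixelBufferCreateWithBytes failed: %d", status);
            return NULL;
        }
        return pxbuffer;                                     // caller releases
    }

I suspect byte order is also behind my failed 16-bit CGBitmapContext attempts: for a little-endian 5-5-5 buffer, the bitmap info presumably needs the byte-order flag OR'd in, e.g. kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder16Little, and a mismatch there would produce exactly the garbage or empty video described above.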

I know I am asking a lot in a single post, but I am lost in this code; I have tried so many combinations, and none of them worked...

1 answer

"I have to multiply the width and height by 2; otherwise, the resulting video is all messed up, and I can't understand why"

Points versus pixels? High-resolution Retina screens have twice as many pixels per point.
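A quick sketch of the relationship (assuming your frames are @2x Retina UIImages, which would explain the hard-coded * 2):

    // A UIImage reports its size in points; its backing CGImage is in pixels.
    // On a @2x Retina image, pixels == points * scale, with scale == 2.0.
    UIImage *frame = [self.frames objectAtIndex:0];
    CGFloat pointWidth = frame.size.width;                // e.g. 320 points
    size_t  pixelWidth = CGImageGetWidth(frame.CGImage);  // e.g. 640 pixels
    // pixelWidth == pointWidth * frame.scale

    // The pixel buffer, the CGBitmapContext, and the AVVideoWidthKey /
    // AVVideoHeightKey settings should all be derived from the same pixel
    // dimensions, rather than multiplying a point size by a hard-coded 2.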


Source: https://habr.com/ru/post/918096/

