Optimized alternative to CGContextDrawImage

I currently do a lot of work with CoreGraphics on OS X.

I ran Time Profiler on my code and found that the biggest bottleneck is in CGContextDrawImage. It is part of a loop that runs many times per second.

I can't optimize the function itself (it lives inside Apple's libraries), but I wonder whether there is a faster alternative, or a way to speed up the call.

I call CGContextDrawImage after setting a blend mode, e.g. CGContextSetBlendMode(context, kCGBlendModeDifference); so any alternative implementation would have to support blending.
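For the specific case of kCGBlendModeDifference, the math a replacement would have to reproduce is simple enough to run directly on raw pixel buffers. The helper below is a hypothetical sketch (not from the original post): it assumes both images are the same size in non-premultiplied 8-bit RGBA; with premultiplied alpha the result is only an approximation.

    #include <stdint.h>
    #include <stdlib.h>

    // Hypothetical helper: per-pixel "difference" blend on raw 8-bit RGBA
    // buffers. Blends `src` into `dst` in place; `pixelCount` is width * height.
    static void blendDifferenceRGBA8(const uint8_t *src, uint8_t *dst, size_t pixelCount)
    {
        for (size_t i = 0; i < pixelCount * 4; i += 4) {
            dst[i]     = (uint8_t)abs((int)dst[i]     - (int)src[i]);     // R
            dst[i + 1] = (uint8_t)abs((int)dst[i + 1] - (int)src[i + 1]); // G
            dst[i + 2] = (uint8_t)abs((int)dst[i + 2] - (int)src[i + 2]); // B
            // i + 3 (alpha) is left untouched
        }
    }

The raw pointers could come from CGBitmapContextGetData on two contexts the images were drawn into once, which would move the expensive decode out of the per-frame loop.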

Time profiling results:

    3658.0ms 15.0%   0.0     CGContextDrawImage
    3658.0ms 15.0%   0.0      ripc_DrawImage
    3539.0ms 14.5%   0.0       ripc_AcquireImage
    3539.0ms 14.5%   0.0        CGSImageDataLock
    3539.0ms 14.5%   1.0         img_data_lock
    3465.0ms 14.2%   0.0          img_interpolate_read
    2308.0ms  9.4%   7.0           resample_band
    1932.0ms  7.9%   1932.0         resample_byte_h_3cpp_vector
     369.0ms  1.5%   369.0          resample_byte_v_Ncpp_vector
    1157.0ms  4.7%   2.0           img_decode_read
    1150.0ms  4.7%   8.0            decode_data
     863.0ms  3.5%   863.0           decode_swap
     267.0ms  1.0%   267.0           decode_byte_8bpc_3

Update:

The actual source matching the profile above is as follows:

    /////////////////////////////////////////////////////////////////////////////////////////

    - (CGImageRef)createBlendedImage:(CGImageRef)image
                         secondImage:(CGImageRef)secondImage
                           blendMode:(CGBlendMode)blendMode
    {
        // Get the image width and height
        size_t width = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);

        // Set the frame
        CGRect frame = CGRectMake(0, 0, width, height);

        // Create context with alpha channel
        CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                     CGImageGetBitsPerComponent(image),
                                                     CGImageGetBytesPerRow(image),
                                                     CGImageGetColorSpace(image),
                                                     kCGImageAlphaPremultipliedLast);

        if (!context) {
            return nil;
        }

        // Draw the image inside the context
        CGContextSetBlendMode(context, kCGBlendModeCopy);
        CGContextDrawImage(context, frame, image);

        // Set the blend mode and draw the second image
        CGContextSetBlendMode(context, blendMode);
        CGContextDrawImage(context, frame, secondImage);

        // Get the masked image from the context
        CGImageRef blendedImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);

        return blendedImage;
    }

    /////////////////////////////////////////////////////////////////////////////////////////

    - (CGImageRef)createImageTick
    {
        // `self.image` and `self.previousImage` are two instance properties (CGImageRefs)

        // Create blended image (stage one)
        CGImageRef stageOne = [self createBlendedImage:self.image
                                           secondImage:self.previousImage
                                             blendMode:kCGBlendModeXOR];

        // Create blended image (stage two) if stage one image is 50% red
        CGImageRef stageTwo = nil;

        if ([self isImageRed:stageOne]) {
            stageTwo = [self createBlendedImage:self.image
                                    secondImage:stageOne
                                      blendMode:kCGBlendModeSourceAtop];
        }

        // Release intermediate image
        CGImageRelease(stageOne);

        return stageTwo;
    }
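One thing that stands out in this code (although it is not what the trace blames) is that createBlendedImage:secondImage:blendMode: creates and destroys a bitmap context on every call. A hypothetical refactor, assuming the image dimensions and pixel format stay stable between frames, would keep a single context alive in an ivar and reuse it:

    // Hypothetical refactor (not from the original post): cache the bitmap
    // context so it survives across frames. Assumes `_blendContext` is declared
    // as a CGContextRef instance variable and that every image passed in shares
    // the same dimensions and pixel format.
    - (CGContextRef)blendContextForImage:(CGImageRef)image
    {
        if (_blendContext == NULL) {
            _blendContext = CGBitmapContextCreate(NULL,
                                                  CGImageGetWidth(image),
                                                  CGImageGetHeight(image),
                                                  CGImageGetBitsPerComponent(image),
                                                  0, // let Quartz pick bytes-per-row
                                                  CGImageGetColorSpace(image),
                                                  kCGImageAlphaPremultipliedLast);
        }
        return _blendContext;
    }

Passing 0 for bytes-per-row also sidesteps a subtle problem in the original: CGImageGetBytesPerRow(image) is the source image's row stride, which is not necessarily a valid stride for the RGBA context being created.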
1 answer

@JeremyRoman et al. Thank you so much for your comments. I draw the same image a couple of times per cycle, into different contexts with different filters, blending it with new images. Is the resampling converting from RGB to RGBA? What can I try in order to speed up or eliminate the resampling? - Chris Nolet
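On the resampling question: the resample_byte_h_3cpp_vector and decode_swap frames in the trace suggest the source image is 3-component (RGB) and in a byte order Quartz cannot blit directly, so it gets converted on every draw. A common mitigation, sketched below under that assumption (the post does not confirm it), is to redraw each source once into a context whose format Quartz prefers and reuse the resulting image for all later draws:

    // Sketch: convert `image` once into a premultiplied, host-byte-order RGBA
    // bitmap so that later CGContextDrawImage calls can blit it without
    // resampling or byte swapping. All calls are standard CoreGraphics API.
    static CGImageRef CreateCompatibleImage(CGImageRef image)
    {
        size_t width = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
            colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
        CGColorSpaceRelease(colorSpace);
        if (!context) {
            return NULL;
        }
        // Drawing at the image's native size avoids interpolation as well.
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
        CGImageRef result = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        return result; // caller releases with CGImageRelease
    }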

This is what Core Image is designed for. See the Core Image Programming Guide for more information. CGContext is designed for rendering final images to the screen, which apparently is not your goal with every image you create.
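For the blends in the question, a minimal Core Image sketch might look like the following. The filter name CIDifferenceBlendMode and the input/output keys are standard Core Image; the surrounding setup is an assumption about how it would slot into the asker's code.

    #import <QuartzCore/QuartzCore.h> // Core Image lives here on OS X

    // Sketch: blend two CGImageRefs with a "difference" blend via Core Image.
    // Reusing one CIContext across frames is the important part: it lets Core
    // Image keep intermediates on the GPU instead of re-acquiring bitmap data
    // on every draw.
    static CGImageRef CreateDifferenceBlend(CIContext *ciContext,
                                            CGImageRef topImage,
                                            CGImageRef bottomImage)
    {
        CIImage *top = [CIImage imageWithCGImage:topImage];
        CIImage *bottom = [CIImage imageWithCGImage:bottomImage];

        CIFilter *blend = [CIFilter filterWithName:@"CIDifferenceBlendMode"];
        [blend setValue:top forKey:kCIInputImageKey];
        [blend setValue:bottom forKey:kCIInputBackgroundImageKey];

        CIImage *output = [blend valueForKey:kCIOutputImageKey];
        return [ciContext createCGImage:output fromRect:[output extent]];
    }

The CIContext can be created once (for example with contextWithCGContext:options:) and kept for the life of the view, rather than rebuilt per frame.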


Source: https://habr.com/ru/post/1492703/

