Why is filtering a cropped image 4 times slower than filtering a resized image (both are the same size)?

I am trying to get to the bottom of this problem with no luck. I have a very simple Swift command-line application that takes one argument - the path of the image to load. It processes the image and filters the resulting fragment with the SepiaTone filter.

It works great. It crops the image to 200x200 and filters it with SepiaTone. Now here is the problem I am facing - the whole process takes 600 ms on my MacBook Air. When I resize (instead of crop) the input image to the same size (200x200), it takes 150 ms.

Why? In both cases I am filtering an image of size 200x200. I am using this specific image for testing (5966x3978).

UPDATE:

This is the specific line of code that takes 4 times as long when dealing with the cropped image:

var ciImage: CIImage = CIImage(cgImage: cgImage)
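
For reference, here is a minimal sketch of how that single line can be timed in isolation. This is an assumption on my part (a plain CFAbsoluteTimeGetCurrent wrapper), not necessarily how the 600 ms / 150 ms figures above were produced:

    import CoreGraphics
    import CoreImage
    import Foundation

    // Hypothetical helper: times only the CGImage -> CIImage conversion.
    // `cgImage` is assumed to be the cropped (or resized) image from the code below.
    func timeCIImageCreation(from cgImage: CGImage) -> CFTimeInterval {
        let start = CFAbsoluteTimeGetCurrent()
        _ = CIImage(cgImage: cgImage)   // the line in question
        return CFAbsoluteTimeGetCurrent() - start
    }

    // Usage: print("CIImage init took \(timeCIImageCreation(from: cgImage) * 1000) ms")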

END OF UPDATE

Code for cropping (200x200):

    import Cocoa      // NSImage
    import CoreImage  // CIImage, CIFilter, CIContext

    // parse args and get image path
    let args: [String] = CommandLine.arguments
    let inputFile: String = args[Int(CommandLine.argc) - 1]
    let inputURL: URL = URL(fileURLWithPath: inputFile)

    // load the image from path into NSImage
    // and convert NSImage into CGImage
    guard let nsImage = NSImage(contentsOf: inputURL),
          var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
        exit(EXIT_FAILURE)
    }

    // CROP THE IMAGE TO 200x200
    // THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
    // IN THOSE TWO EXAMPLES
    let rect = CGRect(x: 0, y: 0, width: 200, height: 200)
    if let croppedImage = cgImage.cropping(to: rect) {
        cgImage = croppedImage
    } else {
        exit(EXIT_FAILURE)
    }
    // END CROPPING

    // convert CGImage to CIImage
    var ciImage: CIImage = CIImage(cgImage: cgImage)

    // initiate SepiaTone
    guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
        exit(EXIT_FAILURE)
    }
    sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
    sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
    guard let result = sepiaFilter.outputImage else {
        exit(EXIT_FAILURE)
    }

    // perform filtering in a GPU context
    let context: CIContext = CIContext()
    guard let output = context.createCGImage(result, from: ciImage.extent) else {
        exit(EXIT_FAILURE)
    }

Code for resizing (200x200):

    import Cocoa      // NSImage
    import CoreImage  // CIImage, CIFilter, CIContext

    // parse args and get image path
    let args: [String] = CommandLine.arguments
    let inputFile: String = args[Int(CommandLine.argc) - 1]
    let inputURL: URL = URL(fileURLWithPath: inputFile)

    // load the image from path into NSImage
    // and convert NSImage into CGImage
    guard let nsImage = NSImage(contentsOf: inputURL),
          var cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
        exit(EXIT_FAILURE)
    }

    // RESIZE THE IMAGE TO 200x200
    // THIS IS THE ONLY BLOCK OF CODE THAT IS DIFFERENT
    // IN THOSE TWO EXAMPLES
    guard let CGcontext = CGContext(data: nil,
                                    width: 200,
                                    height: 200,
                                    bitsPerComponent: cgImage.bitsPerComponent,
                                    bytesPerRow: cgImage.bytesPerRow,
                                    space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: cgImage.bitmapInfo.rawValue) else {
        exit(EXIT_FAILURE)
    }
    CGcontext.draw(cgImage, in: CGRect(x: 0, y: 0, width: 200, height: 200))
    if let resizeOutput = CGcontext.makeImage() {
        cgImage = resizeOutput
    }
    // END RESIZING

    // convert CGImage to CIImage
    var ciImage: CIImage = CIImage(cgImage: cgImage)

    // initiate SepiaTone
    guard let sepiaFilter = CIFilter(name: "CISepiaTone") else {
        exit(EXIT_FAILURE)
    }
    sepiaFilter.setValue(ciImage, forKey: kCIInputImageKey)
    sepiaFilter.setValue(0.5, forKey: kCIInputIntensityKey)
    guard let result = sepiaFilter.outputImage else {
        exit(EXIT_FAILURE)
    }

    // perform filtering in a GPU context
    let context: CIContext = CIContext()
    guard let output = context.createCGImage(result, from: ciImage.extent) else {
        exit(EXIT_FAILURE)
    }
2 answers

It looks like you are doing two different things. In the "slow" version you crop (as in taking a small CGRect out of the original image), and in the "fast" version you resize (as in shrinking the original down to a 200x200 CGRect).

You can prove this by adding two UIImageViews and putting these lines right after each ciImage declaration:

    slowImage.image = UIImage(ciImage: ciImage)
    fastImage.image = UIImage(ciImage: ciImage)

Here are two screenshots from the simulator, with the "slow" image above the "fast" image. The first is with your code, where the "slow" crop origin is (0,0), and the second is with an origin of (2000,2000):

[Simulator screenshots: "slow" vs. "fast" output with crop origin (0,0), and with crop origin (2000,2000)]

Knowing this, I can come up with a few things that are probably contributing to the time.

I am including a link to Apple's documentation on the cropping function below. It explains that some CGRect calculations happen behind the scenes, but it does not explain how it pulls the pixel bits out of the full-sized CGImage - and I think that is where the real slowdown is.

Either way, it appears the time difference comes down to doing two completely different things under the hood.

CGImage.cropping(to:)
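
If my reading of that page is right, a quick diagnostic might back this up: print a few properties of the cropped CGImage and compare them to the parent's. This is only a sketch (the names `cgImage` and `croppedImage` refer to the variables in the question), and the exact values depend on how CoreGraphics packs the data, but my expectation is that the cropped image keeps the parent's row stride because it merely references the original pixel data:

    // Diagnostic sketch - assumes `cgImage` is the full 5966x3978 image and
    // `croppedImage` is the result of cgImage.cropping(to:), as in the question.
    print("original: \(cgImage.width) x \(cgImage.height), bytesPerRow: \(cgImage.bytesPerRow)")
    print("cropped:  \(croppedImage.width) x \(croppedImage.height), bytesPerRow: \(croppedImage.bytesPerRow)")
    // If the cropped image still reports the parent's (much larger) bytesPerRow,
    // it is effectively a 200x200 window into the big buffer, and
    // CIImage(cgImage:) has to skip over most of every row when it copies pixels out.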


Most likely the cgImage lives in video memory, and when you scale the image the hardware is actually used to write the scaled image into a new area of memory. When you crop the cgImage, the documentation implies that it simply keeps a reference to the original image. The line

    var ciImage: CIImage = CIImage(cgImage: cgImage)

has to trigger a read (possibly into main memory?). For the scaled image it can probably just read the whole buffer contiguously; for the cropped image it may have to read it line by line, skipping most of each row, and that could explain the difference - but I am only guessing.
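
If that is the cause, one possible workaround (just a sketch, I have not profiled it) is to force the cropped pixels into their own small, tightly packed buffer before handing them to Core Image - essentially the same redraw your "resize" version already does, only at the cropped rect:

    // Sketch of a possible workaround: redraw the cropped CGImage into its own
    // tightly packed bitmap so CIImage(cgImage:) reads a small contiguous buffer
    // instead of a 200x200 window into the 5966x3978 one.
    // Assumption: `croppedImage` is the result of cgImage.cropping(to:).
    func repacked(_ croppedImage: CGImage) -> CGImage? {
        guard let ctx = CGContext(data: nil,
                                  width: croppedImage.width,
                                  height: croppedImage.height,
                                  bitsPerComponent: croppedImage.bitsPerComponent,
                                  bytesPerRow: 0, // let CoreGraphics pick a tight row stride
                                  space: croppedImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: croppedImage.bitmapInfo.rawValue) else {
            return nil
        }
        ctx.draw(croppedImage, in: CGRect(x: 0, y: 0,
                                          width: croppedImage.width,
                                          height: croppedImage.height))
        return ctx.makeImage()
    }

If the theory is right, creating the CIImage from the repacked copy should land much closer to the 150 ms case. Another option might be to skip the CGImage crop entirely and crop on the Core Image side instead (CIImage has its own crop API), so the expensive read out of the big image happens inside the filter pipeline - but again, I am only guessing.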


Source: https://habr.com/ru/post/1258593/

