How to apply a fast pixelation filter to an image?

I have a problem with my per-pixel image processing algorithm.

I load the image up front into an array of type unsigned char *. Later, when necessary, I modify this data and update the displayed image. The update takes too much time. Here is how I do it:

    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(.....);
    CGImageRef cgImage = CGImageCreate(....);
    [imageView setImage:[UIImage imageWithCGImage:cgImage]];
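Fleshed out, those calls look roughly like the following for a 32-bit RGBA buffer (a sketch only: the dimensions and the buffer name below are placeholders for illustration, not values from my real code):

    // Placeholder dimensions and buffer standing in for the modified pixel data.
    size_t width = 1024;
    size_t height = 768;
    size_t bytesPerRow = width * 4;                       // 4 bytes per pixel (RGBA8888)
    unsigned char *rawData = calloc(height, bytesPerRow);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, rawData,
                                                                  bytesPerRow * height, NULL);
    CGImageRef cgImage = CGImageCreate(width, height,
                                       8,                 // bits per component
                                       32,                // bits per pixel
                                       bytesPerRow,
                                       colorSpace,
                                       kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                       dataProvider,
                                       NULL,              // decode array
                                       NO,                // should interpolate
                                       kCGRenderingIntentDefault);

    [imageView setImage:[UIImage imageWithCGImage:cgImage]];

    // Core Foundation objects are not managed by ARC, so release them explicitly.
    // The raw buffer itself must stay alive as long as the image references it.
    CGImageRelease(cgImage);
    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(colorSpace);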

Everything works, but processing a large image is very slow. I tried running this on a background thread, but that did not help.

So basically it takes too much time. Does anyone know how to improve it?

+3
5 answers

As suggested by others, you will want to offload this work from the CPU to the GPU in order to have any decent processing performance on these mobile devices.

To that end, I created an open source framework for iOS called GPUImage, which makes this kind of GPU-accelerated image processing relatively easy. It requires OpenGL ES 2.0 support, but every iOS device sold in the last couple of years has it (statistics show this covers about 97% of all iOS devices out there).

One of the filters bundled with this framework is a pixellation filter. The SimpleVideoFilter sample application shows how to use it, with a slider that controls the pixel width in the processed image:

Screenshot of pixellation filter application

This filter is implemented as a fragment shader with the following GLSL code:

    varying highp vec2 textureCoordinate;

    uniform sampler2D inputImageTexture;
    uniform highp float fractionalWidthOfPixel;

    void main()
    {
        highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel);
        highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor);
        gl_FragColor = texture2D(inputImageTexture, samplePos);
    }

In my benchmarks, GPU-based filters like this perform 6-24X faster than equivalent CPU-bound processing routines for images and video on iOS. The framework linked above should be reasonably easy to incorporate into an application, and the source code is freely available for you to customize as you see fit.
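For a still image like the one in the question, applying the bundled pixellation filter looks roughly like this. This is only a sketch: the exact image-capture API has varied a little between framework versions, so check the version you pull in.

    #import "GPUImage.h"

    // Pixellate a still UIImage with the stock GPUImagePixellateFilter.
    UIImage *inputImage = [UIImage imageNamed:@"test"];

    GPUImagePixellateFilter *pixellateFilter = [[GPUImagePixellateFilter alloc] init];
    pixellateFilter.fractionalWidthOfAPixel = 0.05; // pixel width as a fraction of the image width

    // One-shot convenience filtering; for live video, add the filter as a target
    // of a GPUImageVideoCamera instead.
    UIImage *pixellatedImage = [pixellateFilter imageByFilteringImage:inputImage];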

+15

How about using the Core Image filter named CIPixellate? Here is a snippet showing how I implemented it. You can play with kCIInputScaleKey to get the desired intensity:

    // initialize context and image
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation([UIImage imageNamed:@"test"])];

    // set filter and properties
    CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
    [filter setValue:logo forKey:kCIInputImageKey];
    [filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
    [filter setValue:[NSNumber numberWithDouble:100.0] forKey:kCIInputScaleKey];       // default: 8.0

    // render image
    CIImage *result = (CIImage *)[filter valueForKey:kCIOutputImageKey];
    CGRect extent = result.extent;
    CGImageRef cgImage = [context createCGImage:result fromRect:extent];

    // result
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage); // ARC does not manage CGImageRef, so release it explicitly

Here's the official Apple Filter Tutorial and the List of Available Filters.

Update #1

I just wrote a method to render in the background:

    - (void)pixelateImage:(UIImage *)image withIntensity:(NSNumber *)intensity completionHandler:(void (^)(UIImage *pixelatedImage))handler {
        // async task
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // initialize context and image
            CIContext *context = [CIContext contextWithOptions:nil];
            CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation(image)];

            // set filter and properties
            CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
            [filter setValue:logo forKey:kCIInputImageKey];
            [filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
            [filter setValue:intensity forKey:kCIInputScaleKey];                               // default: 8.0

            // render image
            CIImage *result = (CIImage *)[filter valueForKey:kCIOutputImageKey];
            CGRect extent = result.extent;
            CGImageRef cgImage = [context createCGImage:result fromRect:extent];

            // result (renamed so it does not shadow the `image` parameter)
            UIImage *pixelatedImage = [[UIImage alloc] initWithCGImage:cgImage];
            CGImageRelease(cgImage); // ARC does not manage CGImageRef, so release it explicitly

            // dispatch to main thread
            dispatch_async(dispatch_get_main_queue(), ^{
                handler(pixelatedImage);
            });
        });
    }

Call it like this:

    [self pixelateImage:[UIImage imageNamed:@"test"]
          withIntensity:[NSNumber numberWithDouble:100.0]
      completionHandler:^(UIImage *pixelatedImage) {
          self.logoImageView.image = pixelatedImage;
    }];
+3

The iPhone is not a great device for compute-heavy tasks such as image manipulation. If you want to improve performance when displaying very high resolution images, perhaps while performing some image processing at the same time, use CATiledLayer. It is designed to render content in tiles, so you can display and process the image data only as needed, one tile at a time.
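A minimal sketch of that approach, with a custom view backed by a CATiledLayer (the class and property names below are placeholders for illustration):

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface TiledImageView : UIView
    @property (nonatomic, strong) UIImage *sourceImage; // placeholder property
    @end

    @implementation TiledImageView

    + (Class)layerClass {
        return [CATiledLayer class]; // back this view with a tiled layer
    }

    - (void)drawRect:(CGRect)rect {
        // Called once per visible tile; the context is clipped to `rect`,
        // so expensive per-pixel work can be limited to what is on screen.
        [self.sourceImage drawInRect:self.bounds];
    }

    @end

UIKit then requests only the tiles that scroll into view, so a very large image never has to be processed in one go.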

+1

I agree with @Xorlev. The only other thing I can suggest (assuming you use a lot of floating point operations) is that if you are building for armv6 with the Thumb ISA, compiling without the -mthumb option may improve performance.

0

@Kai Burghardt's answer converted to Swift 3:

    func pixelateImage(_ image: UIImage, withIntensity intensity: Int) -> UIImage {
        // initialize context and image
        let context = CIContext(options: nil)
        let logo = CIImage(data: UIImagePNGRepresentation(image)!)!

        // set filter and properties
        let filter = CIFilter(name: "CIPixellate")
        filter?.setValue(logo, forKey: kCIInputImageKey)
        filter?.setValue(CIVector(x: 150, y: 150), forKey: kCIInputCenterKey)
        filter?.setValue(intensity, forKey: kCIInputScaleKey)

        // render image
        let result = filter?.value(forKey: kCIOutputImageKey) as! CIImage
        let extent = result.extent
        let cgImage = context.createCGImage(result, from: extent)

        // result
        let processedImage = UIImage(cgImage: cgImage!)
        return processedImage
    }

Call it like this:

    self.myImageView.image = pixelateImage(UIImage(named: "test")!, withIntensity: 100)
0
