CIImage (iOS): adding a 3x3 convolution after a monochrome filter somehow restores the color

I convert a CIImage to monochrome, crop it with CICrop, and run a Sobel kernel to detect edges; the #if section at the bottom is used to display the result:

    CIImage *ci = [[CIImage alloc] initWithCGImage:uiImage.CGImage];
    CIImage *gray = [CIFilter filterWithName:@"CIColorMonochrome"
                               keysAndValues:@"inputImage", ci,
                                             @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]],
                                             nil].outputImage;
    CGRect rect = [ci extent];
    rect.origin = CGPointZero;
    CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width * 0.2, rect.size.height);
    CIVector *cropRect = [CIVector vectorWithX:rect.origin.x
                                             Y:rect.origin.y
                                             Z:rect.size.width * 0.2
                                             W:rect.size.height];
    CIImage *left = [gray imageByCroppingToRect:cropRectLeft];
    CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
    [cropFilter setValue:left forKey:@"inputImage"];
    [cropFilter setValue:cropRect forKey:@"inputRectangle"];

    // The Sobel convolution produces an image that is 0.5,0.5,0.5,0.5 wherever the image
    // is flat. On edges the image contains values that deviate from that, based on the
    // strength and direction of the edge.
    const double g = 1.;
    const CGFloat weights[] = { 1*g, 0, -1*g,
                                2*g, 0, -2*g,
                                1*g, 0, -1*g };
    left = [CIFilter filterWithName:@"CIConvolution3X3"
                      keysAndValues:@"inputImage", cropFilter.outputImage,
                                    @"inputWeights", [CIVector vectorWithValues:weights count:9],
                                    @"inputBias", @0.5,
                                    nil].outputImage;

    #define VISUALHELP 1
    #if VISUALHELP
    CGImageRef imageRefLeft = [gcicontext createCGImage:left fromRect:cropRectLeft];
    CGContextDrawImage(context, cropRectLeft, imageRefLeft);
    CGImageRelease(imageRefLeft);
    #endif

Now, when the 3x3 convolution is not part of the pipeline, the part of the image where I run edge detection is grayed out as expected, but as soon as CIConvolution3X3 is part of the pipeline, the color magically returns. This happens whether I use CIColorMonochrome or CIPhotoEffectMono up front to remove the color. Any ideas on how to keep the color out all the way to the end of the pipeline? Thanks

UPD: unsurprisingly, running a raw custom monochrome kernel like this

    kernel vec4 gray(sampler image)
    {
        vec4 s = sample(image, samplerCoord(image));
        float r = (s.r * .299 + s.g * .587 + s.b * .114) * s.a;
        s = vec4(r, r, r, 1);
        return s;
    }

instead of Apple's standard monochrome filters leads to the same problem: the color comes back as soon as the 3x3 convolution is part of my Core Image pipeline.

2 answers

The problem is that Core Image convolution operations (for example CIConvolution3X3, CIConvolution5X5, and CIGaussianBlur) operate on all four channels of the input image. This means that in your code example, the resulting alpha channel will be 0.5 where you probably want it to be 1.0. Try adding a simple kernel after the convolution to set the alpha back to 1.
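A minimal sketch of that alpha-reset step, written in the same kernel language as the question's custom kernel (the kernel name and the wrapper snippet are illustrative, not from the original answer):

    kernel vec4 opaque(sampler image)
    {
        // Keep the convolved RGB values but force alpha back to fully opaque.
        vec4 s = sample(image, samplerCoord(image));
        return vec4(s.rgb, 1.0);
    }

Assuming the CIKernel API available on iOS 8, it could be applied right after the convolution step, something like:

    CIKernel *opaque = [CIKernel kernelWithString:
        @"kernel vec4 opaque(sampler image)"
        @"{ vec4 s = sample(image, samplerCoord(image)); return vec4(s.rgb, 1.0); }"];
    left = [opaque applyWithExtent:left.extent
                       roiCallback:^CGRect(int index, CGRect destRect) { return destRect; }
                         arguments:@[left]];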


UPD: I abandoned Core Image for this task. Using two instances of CIFilter or CIKernel seems to cause a conflict: something inside Core Image appears to mismanage the GLES state, and working around that costs more than simply using something other than Core Image (custom CI filters only work on iOS 8 anyway). GPUImage turned out to be less buggy and easier to maintain and debug.


Source: https://habr.com/ru/post/1205672/

