I convert a CIImage to monochrome, crop it with CICrop, and run a Sobel convolution to detect edges; the #if section below is used to display the result.
    CIImage *ci = [[CIImage alloc] initWithCGImage:uiImage.CGImage];
    CIImage *gray = [CIFilter filterWithName:@"CIColorMonochrome"
                               keysAndValues:@"inputImage", ci,
                                             @"inputColor", [[CIColor alloc] initWithColor:[UIColor whiteColor]],
                                             nil].outputImage;
    CGRect rect = [ci extent];
    rect.origin = CGPointZero;
    CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width * 0.2, rect.size.height);
    CIVector *cropRect = [CIVector vectorWithX:rect.origin.x
                                             Y:rect.origin.y
                                             Z:rect.size.width * 0.2
                                             W:rect.size.height];
    CIImage *left = [gray imageByCroppingToRect:cropRectLeft];
    CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
    [cropFilter setValue:left forKey:@"inputImage"];
    [cropFilter setValue:cropRect forKey:@"inputRectangle"];
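For reference, the Sobel pass is appended to the cropped image roughly like this (a sketch: the horizontal-gradient weights and the 0.5 bias are my assumptions; the question applies to any 3x3 kernel fed through CIConvolution3X3):

    // Hypothetical continuation: a horizontal Sobel kernel applied
    // to the cropped grayscale image via CIConvolution3X3.
    CIFilter *sobel = [CIFilter filterWithName:@"CIConvolution3X3"];
    [sobel setValue:cropFilter.outputImage forKey:@"inputImage"];
    [sobel setValue:[CIVector vectorWithValues:(CGFloat[]){-1, 0, 1,
                                                           -2, 0, 2,
                                                           -1, 0, 1} count:9]
             forKey:@"inputWeights"];
    // Bias shifts the signed gradient back into the visible range.
    [sobel setValue:@(0.5) forKey:@"inputBias"];
    CIImage *edges = sobel.outputImage;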
Now, when the 3x3 convolution is not part of the CIImage pipeline, the region of the image where I run edge detection comes out grayscale, but as soon as a CIConvolution3X3 filter is part of the processing pipeline the color magically returns. This happens whether I use the CIColorMonochrome or the CIPhotoEffectMono filter to remove the color. Any ideas on how to keep the color out all the way to the bottom of the pipeline? Thanks.
UPD: notably, running a raw hand-written monochrome kernel like this
    kernel vec4 gray(sampler image) {
        vec4 s = sample(image, samplerCoord(image));
        float r = (s.r * 0.299 + s.g * 0.587 + s.b * 0.114) * s.a;
        return vec4(r, r, r, 1.0);
    }
instead of using Apple's standard monochrome filters leads to the same problem: the color returns whenever the 3x3 convolution is part of my Core Image pipeline.
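For completeness, the custom kernel is compiled and applied roughly like this (a sketch; the identity ROI callback assumes the kernel samples only the current pixel, which holds for this grayscale kernel):

    // Hypothetical wrapper around the custom grayscale kernel above.
    NSString *src = @"kernel vec4 gray(sampler image) {"
                     "  vec4 s = sample(image, samplerCoord(image));"
                     "  float r = (s.r * 0.299 + s.g * 0.587 + s.b * 0.114) * s.a;"
                     "  return vec4(r, r, r, 1.0); }";
    CIKernel *grayKernel = [[CIKernel kernelsWithString:src] firstObject];
    CIImage *grayOut = [grayKernel applyWithExtent:ci.extent
                                       roiCallback:^CGRect(int index, CGRect destRect) {
                                           return destRect; // 1:1 sampling, no neighborhood
                                       }
                                         arguments:@[ci]];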