I want to mask an image using another image as a mask. The masking itself works, but the resulting image does not look good: the edges inside the masked region are jagged.
I think the problem is related to the Retina display scale. The `scale` property of the two images differs:
- The image I want to mask has a scale of 1. It usually has a resolution of more than 1000 × 1000 pixels.
- The mask image (black and white only), which determines the shape of the result, has a scale of 2. It typically has a resolution of 300 × 300 pixels.
The resulting image has a scale of 1.
The code I use is:
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;

    // Build a Core Graphics masking image from the mask's bitmap data.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef),
                                        NULL, false);

    // Apply the mask to the source image.
    CGImageRef masked = CGImageCreateWithMask(image.CGImage, mask);
    CGImageRelease(mask);

    UIImage *maskedImage = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return maskedImage;
}
How can I get a masked image that respects the Retina scale?
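One idea I am considering (an untested sketch, not a confirmed fix): resize the mask to the source image's pixel dimensions before masking, so that `CGImageCreateWithMask` does not stretch a small mask over a large image. The method name `maskImage:withResizedMask:` is my own; it delegates to the masking method above.

```objectivec
// Sketch: redraw the mask at the target image's pixel size with
// high-quality interpolation, then mask as before.
+ (UIImage *)maskImage:(UIImage *)image withResizedMask:(UIImage *)maskImage {
    // Match the mask's pixel dimensions to the source image's pixels.
    CGSize pixelSize = CGSizeMake(CGImageGetWidth(image.CGImage),
                                  CGImageGetHeight(image.CGImage));

    // Scale 1.0 so the context's pixel size equals pixelSize exactly.
    UIGraphicsBeginImageContextWithOptions(pixelSize, YES, 1.0);
    CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(),
                                     kCGInterpolationHigh);
    [maskImage drawInRect:CGRectMake(0, 0, pixelSize.width, pixelSize.height)];
    UIImage *resizedMask = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return [self maskImage:image withMask:resizedMask];
}
```

I am not sure whether this alone removes the jagged edges, or whether the final `UIImage` also needs to be created with an explicit scale.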