As far as I can tell, the problem is not in maskImage:withMask:. The output you posted as incorrect is actually not wrong: where the pixels of image1 are black, image2 is not displayed.
The problem is probably either in the functions you used to load the image and the mask, or in the code that produces the graphic output. In fact, it seems to me that you got the desired result, but you are using the wrong color space (grayscale without alpha). Check whether the image argument you supply is really in RGBA format, and whether the returned UIImage is drawn into some other CGContext that uses DeviceGray as its color space.
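As a quick check, you can log the alpha info and the color-space model of the CGImage backing your UIImage (a minimal sketch; the variable name `image` stands in for whatever UIImage you pass to the method):

```objc
CGImageRef cg = image.CGImage;
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cg);
CGColorSpaceModel model = CGColorSpaceGetModel(CGImageGetColorSpace(cg));

// For the masking to behave as expected, the source image should be RGB
// with an alpha channel; kCGColorSpaceModelMonochrome indicates grayscale.
NSLog(@"alpha info = %u, color space model = %d", alphaInfo, (int)model);
if (model != kCGColorSpaceModelRGB || alphaInfo == kCGImageAlphaNone) {
    NSLog(@"warning: image is not RGBA, output may be rendered in grayscale");
}
```

If this reports a monochrome model or no alpha, the issue is in how the image was loaded or drawn, not in the masking itself.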
Another possible cause that comes to mind is that you have swapped image1 and image2. According to the documentation, the mask should be scaled to the image size (see below), but your mask is a small image. This also seems plausible because image1 is grayscale, although when I tried exchanging image1 and image2, I got image1 back unmodified as output.
To run the tests, I used the attached images, and I copied and pasted the method -(UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage as it is.
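For reference, the method body is not shown in your question; a common implementation of this kind of maskImage:withMask: helper (a sketch of the usual Core Graphics approach, not necessarily your exact code) looks like this:

```objc
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGImageRef maskRef = maskImage.CGImage;

    // Build a true image mask from the grayscale mask image.
    // Note: whether black or white ends up hiding the underlying pixels
    // depends on whether a real image mask or a plain grayscale image
    // is passed to CGImageCreateWithMask.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef),
                                        NULL, false);

    CGImageRef maskedCG = CGImageCreateWithMask(image.CGImage, mask);
    UIImage *result = [UIImage imageWithCGImage:maskedCG];

    CGImageRelease(mask);
    CGImageRelease(maskedCG);
    return result;
}
```

Note that the image argument here must be backed by an RGBA CGImage; applying a mask to a grayscale source is a likely explanation for the output you described.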
I used a simple iOS project with a single view containing a UIImageView (with tag 1), and the following code in the controller:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    UIImageView *imageView = (UIImageView *)[self.view viewWithTag:1];
    UIImage *im1 = [UIImage imageNamed:@"image1"];
    UIImage *im2 = [UIImage imageNamed:@"image2"];
    imageView.image = [self maskImage:im2 withMask:im1];
}
I get the following output (this is correct):

If you mistakenly swap im1 and im2, you get image1 back unmodified instead.