In what order does image pixel data get "scanned"?

Goal:

Find the first black pixel on the left side of an image that contains only black and transparent pixels.

What I have:

I know how to get pixel data, and I have an array of the image's black and transparent pixels (found here: https://stackoverflow.com/a/312502/ ):

    + (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count {
        NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

        // First get the image into your data buffer
        CGImageRef imageRef = [image CGImage];
        NSUInteger width = CGImageGetWidth(imageRef);
        NSUInteger height = CGImageGetHeight(imageRef);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        unsigned char *rawData = malloc(height * width * 4);
        NSUInteger bytesPerPixel = 4;
        NSUInteger bytesPerRow = bytesPerPixel * width;
        NSUInteger bitsPerComponent = 8;
        CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                     bitsPerComponent, bytesPerRow, colorSpace,
                                                     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGContextRelease(context);

        // Now rawData contains the image data in the RGBA8888 pixel format.
        int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
        for (int ii = 0; ii < count; ++ii) {
            // Note: the alpha value must be a float type; storing it in an
            // NSUInteger (as in the original snippet) would truncate it to 0.
            CGFloat alpha = rawData[byteIndex + 3] / 255.0;
            byteIndex += 4;
            [result addObject:[NSNumber numberWithFloat:alpha]];
        }

        free(rawData);
        return result;
    }

What is the problem?

I cannot understand the order in which the function “scans” the image.

What I want is to walk the image column by column and find the first column that contains a non-transparent pixel (alpha of 1), so that I can crop away the transparent left side of the image.

How to get pixels by columns?

thanks

Shani

2 answers

Bytes are ordered from left to right, top to bottom. So to do what you want, I think you want to loop over rawData like this:

    int x = 0;
    int y = 0;
    BOOL found = NO;
    for (x = 0; x < width; x++) {
        for (y = 0; y < height; y++) {
            unsigned char alphaByte = rawData[(y * bytesPerRow) + (x * bytesPerPixel) + 3];
            if (alphaByte > 0) {
                found = YES;
                break;
            }
        }
        if (found) break;
    }
    NSLog(@"First non-transparent pixel at %i, %i", x, y);

Then the first column that contains an opaque pixel is column x.


Typically, one iterates over an image array from top to bottom over the rows, and within each row from left to right over the columns. Here you want the opposite: iterate over each column, starting from the left, and within each column go through all the rows, checking for a black pixel.

This will give you the leftmost black pixel:

    size_t x;
    size_t maxIndex = height * bytesPerRow;
    for (x = 0; x < bytesPerRow; x += bytesPerPixel) {
        for (size_t index = x; index < maxIndex; index += bytesPerRow) {
            if (rawData[index + 3] > 0) {
                goto exitLoop;
            }
        }
    }
    exitLoop:
    if (x < bytesPerRow) {
        x /= bytesPerPixel;
        // leftmost black column is `x`
    }

Well, it almost looks like magic, but it is the same loop, just slightly optimized and tidied up.

Although a goto is generally accepted as a way to break out of two nested loops from inside the inner one, it is still ugly. It makes me really miss D's nice flow control statements (labeled break) ...

The function you presented in the sample code does something else. It starts at a given position in the image (determined by xx and yy) and walks through count pixels, going from the start position to the right and continuing on the following rows. It collects the alpha values of those pixels into the array it returns.

When passed xx = yy = 0, it would therefore find the topmost matching pixel, not the leftmost one. The code above reverses that traversal order. Remember that a 2D image is just a 1D array in memory, starting with the top row from left to right and continuing with the subsequent rows. With a bit of simple arithmetic you can iterate over it either by rows or by columns.


Source: https://habr.com/ru/post/1388902/
