Comparing AVFoundation buffer with a saved image

I am a first-time poster but a long-time reader of StackOverflow, and I must say it has been a great source of knowledge for me.

I am trying to learn my way around the AVFoundation framework.

What I want to do is save what the camera currently sees, and then detect when something in the picture changes.

Here is the part where I save the image as a UIImage:

if (shouldSetBackgroundImage) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(rowBase, bufferWidth,
        bufferHeight, 8, bytesPerRow,
        colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage * image = [UIImage imageWithCGImage:quartzImage];
    [self setBackgroundImage:image];
    NSLog(@"reference image actually set");

    // Release the Quartz image
    CGImageRelease(quartzImage);

    //Signal that the image has been saved
    shouldSetBackgroundImage = NO;

}
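For context: rowBase, bufferWidth, bufferHeight, and bytesPerRow are not defined in this snippet; presumably they come from the sample buffer in the capture delegate. A sketch of the usual setup (my reconstruction, not the actual code from the question):

    // Typical extraction of the raw pixel data inside
    // captureOutput:didOutputSampleBuffer:fromConnection:
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    unsigned char *rowBase = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bufferWidth  = CVPixelBufferGetWidth(imageBuffer);
    size_t bufferHeight = CVPixelBufferGetHeight(imageBuffer);
    size_t bytesPerRow  = CVPixelBufferGetBytesPerRow(imageBuffer);

    // ... use the buffer ...

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);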

and here is the part where I check whether the image the camera currently sees differs from the saved one:

else {

    CGImageRef cgImage = [backgroundImage CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    // CFDataGetBytePtr returns const UInt8 *, so the pointer must be const
    const UInt8 *data = CFDataGetBytePtr(bitmapData);

    if (data != NULL)
    {
        int64_t numDiffer = 0, pixelCount = 0;
        NSMutableArray * pointsMutable = [NSMutableArray array];

        // sample every 8th pixel in each dimension instead of all of them
        for( int row = 0; row < bufferHeight; row += 8 ) {
            for( int column = 0; column < bufferWidth; column += 8 ) {

                // we get one pixel from each source (live buffer and saved image)
                unsigned char *pixel = rowBase + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);
                const UInt8 *referencePixel = data + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);

                pixelCount++;

                if ( !match(pixel, referencePixel, matchThreshold) ) {
                    numDiffer++;
                    // map buffer coordinates onto (rotated) screen coordinates
                    [pointsMutable addObject:[NSValue valueWithCGPoint:
                        CGPointMake(SCREEN_WIDTH - (column / (float) bufferHeight) * SCREEN_WIDTH - 4.0,
                                    (row / (float) bufferWidth) * SCREEN_HEIGHT - 4.0)]];
                }
            }
        }
        numberOfPixelsThatDiffer = numDiffer;
        points = [pointsMutable copy];
    }

    // balance the Copy in CGDataProviderCopyData so the data is not leaked
    CFRelease(bitmapData);
}
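The match() helper is not shown in the question; presumably it compares the color channels of the two pixels against the threshold. A minimal sketch of what such a function might look like, assuming 32-bit BGRA pixels (my guess at its shape, not the asker's actual code):

    // Returns YES when two 4-byte BGRA pixels agree within `threshold`
    // in each of the three color channels (the alpha byte is ignored).
    static BOOL match(const unsigned char *pixel,
                      const unsigned char *referencePixel,
                      int threshold)
    {
        for (int channel = 0; channel < 3; channel++) {
            if (abs(pixel[channel] - referencePixel[channel]) > threshold) {
                return NO;
            }
        }
        return YES;
    }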

For some reason this does not work: the iPhone detects almost everything as different from the saved image, even though I set a very low threshold for detection in the matching function...

Do you have any idea what I'm doing wrong?


I can't tell you exactly which line breaks your comparison, but I can tell you how I would approach this kind of task on the iPhone, because the current structure will hurt you on both correctness and performance.

First, don't round-trip every frame through UIImage. Creating a UIImage from the sample buffer and then pulling the bytes back out of that UIImage is expensive, and it means the two sides of your comparison have been through different processing before you ever compare them.

Instead, keep the reference frame as a raw copy of the pixel data coming out of the capture output, and compare buffer against buffer. That way both sides are guaranteed to share the same pixel format and the same bytes-per-row (see the sketch after this paragraph).

Even so, be aware that touching every pixel on the CPU is slow; at camera frame rates that alone can become a problem.
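A minimal sketch of that idea, reusing the question's rowBase, bytesPerRow, and bufferHeight; the referenceBytes buffer and its handling are my own illustration, not code from this answer:

    // Keep the reference frame as a raw byte copy of the camera buffer
    // instead of a UIImage, so both sides of the comparison share one
    // memory layout. referenceBytes is a hypothetical name.
    if (shouldSetBackgroundImage) {
        if (referenceBytes == NULL) {
            referenceBytes = malloc(bytesPerRow * bufferHeight);
        }
        memcpy(referenceBytes, rowBase, bytesPerRow * bufferHeight);
        shouldSetBackgroundImage = NO;
    }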

When you do need per-pixel processing, I would move it onto the GPU with OpenGL ES 2.0 shaders. In my benchmarks, operations like this ran 14-28X faster on the GPU than on the CPU. There are examples of processing live iPhone camera frames entirely in GLSL that show how to set this up.
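As a purely illustrative aside (my own sketch, not code from this answer): the GLSL work for frame differencing can be as small as a fragment shader that subtracts a stored reference frame from the live frame, embedded here as an Objective-C string constant with made-up uniform names:

    static NSString *const kDifferenceFragmentShader = @""
        "varying highp vec2 textureCoordinate;\n"
        "uniform sampler2D currentFrame;\n"     // live camera texture
        "uniform sampler2D referenceFrame;\n"   // saved background texture
        "void main()\n"
        "{\n"
        "    lowp vec4 current = texture2D(currentFrame, textureCoordinate);\n"
        "    lowp vec4 reference = texture2D(referenceFrame, textureCoordinate);\n"
        "    // output the absolute per-channel difference between the frames\n"
        "    gl_FragColor = vec4(abs(current.rgb - reference.rgb), 1.0);\n"
        "}\n";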


I don't have a direct answer (there may not be a single bug here), but a few observations. Whatever you DO, don't expect two frames from a camera to match byte for byte!

A camera sensor is noisy: even with a completely static scene, consecutive frames differ slightly at almost every pixel. A strict per-pixel comparison will therefore report that nearly everything has changed, which is exactly the symptom you describe.

One more point: decide what a meaningful change is. A single differing sample tells you very little; look at how many of the sampled pixels differ before declaring that the scene has changed (a rough sketch follows).
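Something along these lines, built on the question's own counters; the 5% figure is an arbitrary illustration, not a value from this answer:

    // Treat the scene as "changed" only when a meaningful fraction of the
    // sampled pixels differ, so isolated noisy pixels don't trigger detection.
    float differingFraction = (float) numberOfPixelsThatDiffer / (float) pixelCount;
    BOOL sceneChanged = (differingFraction > 0.05f);   // e.g. more than 5%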

If you don't want to build all of this yourself, have a look at an existing library such as OpenCV, which, among other things, has been ported to the iPhone. It already contains well-tested routines for this kind of image comparison.

More broadly, if you are going to keep working with camera input, it is worth spending some time on the fundamentals of Image Processing/Computer Vision; change detection is a well-studied problem there.

Good luck ;)


Source: https://habr.com/ru/post/1785748/

