iOS: getting a rectangle-shaped image from a background image

I am working on an implementation where I have a rectangular image inside a large background image. I am trying to programmatically extract that rectangular region from the large image and then read the text it contains. I am trying to use the third-party OpenCV framework, but I could not extract the rectangle from the large background image. Can someone please guide me on how to achieve this?

UPDATED:

I found a link that detects square shapes using OpenCV. Can I change it to search for rectangular shapes instead? Can someone help me with this?

LATEST UPDATED:

I finally got the code working; here it is below.

    - (cv::Mat)cvMatWithImage:(UIImage *)image
    {
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
        CGFloat cols = image.size.width;
        CGFloat rows = image.size.height;

        cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

        CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to backing data
                                                        cols,          // Width of bitmap
                                                        rows,          // Height of bitmap
                                                        8,             // Bits per component
                                                        cvMat.step[0], // Bytes per row
                                                        colorSpace,    // Colorspace
                                                        kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault); // Bitmap info flags

        CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
        CGContextRelease(contextRef);

        return cvMat;
    }

    - (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
    {
        NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

        CGColorSpaceRef colorSpace;
        if (cvMat.elemSize() == 1) {
            colorSpace = CGColorSpaceCreateDeviceGray();
        } else {
            colorSpace = CGColorSpaceCreateDeviceRGB();
        }

        // note: under ARC the NSData must be bridged to CFDataRef
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

        CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                            cvMat.rows,
                                            8,
                                            8 * cvMat.elemSize(),
                                            cvMat.step[0],
                                            colorSpace,
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                            provider,
                                            NULL,
                                            false,
                                            kCGRenderingIntentDefault);

        UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);

        return finalImage;
    }

    - (void)forOpenCV
    {
        imageView = [UIImage imageNamed:@"myimage.jpg"];
        if (imageView != nil) {
            cv::Mat tempMat = [imageView CVMat];
            cv::Mat greyMat = [self cvMatWithImage:imageView];
            cv::vector<cv::vector<cv::Point> > squares;
            cv::Mat img = [self debugSquares:squares :greyMat];
            imageView = [self UIImageFromCVMat:img];
            self.imageView.image = imageView;
        }
    }

    double angle(cv::Point pt1, cv::Point pt2, cv::Point pt0)
    {
        double dx1 = pt1.x - pt0.x;
        double dy1 = pt1.y - pt0.y;
        double dx2 = pt2.x - pt0.x;
        double dy2 = pt2.y - pt0.y;
        return (dx1*dx2 + dy1*dy2) / sqrt((dx1*dx1 + dy1*dy1) * (dx2*dx2 + dy2*dy2) + 1e-10);
    }

    - (cv::Mat)debugSquares:(std::vector<std::vector<cv::Point> >)squares :(cv::Mat &)image
    {
        NSLog(@"%lu", squares.size());

        // blur will enhance edge detection
        cv::Mat blurred = image.clone();
        medianBlur(image, blurred, 9);

        cv::Mat gray0(image.size(), CV_8U), gray;
        cv::vector<cv::vector<cv::Point> > contours;

        // find squares in every color plane of the image
        for (int c = 0; c < 3; c++) {
            int ch[] = {c, 0};
            mixChannels(&image, 1, &gray0, 1, ch, 1);

            // try several threshold levels
            const int threshold_level = 2;
            for (int l = 0; l < threshold_level; l++) {
                // Use Canny instead of zero threshold level!
                // Canny helps to catch squares with gradient shading
                if (l == 0) {
                    Canny(gray0, gray, 10, 20, 3);
                    // Dilate helps to remove potential holes between edge segments
                    dilate(gray, gray, cv::Mat(), cv::Point(-1, -1));
                } else {
                    gray = gray0 >= (l + 1) * 255 / threshold_level;
                }

                // Find contours and store them in a list
                findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

                // Test contours
                cv::vector<cv::Point> approx;
                for (size_t i = 0; i < contours.size(); i++) {
                    // approximate contour with accuracy proportional
                    // to the contour perimeter
                    approxPolyDP(cv::Mat(contours[i]), approx,
                                 arcLength(cv::Mat(contours[i]), true) * 0.02, true);

                    // Note: absolute value of an area is used because
                    // area may be positive or negative - in accordance with the
                    // contour orientation
                    if (approx.size() == 4 &&
                        fabs(contourArea(cv::Mat(approx))) > 1000 &&
                        isContourConvex(cv::Mat(approx))) {
                        double maxCosine = 0;

                        for (int j = 2; j < 5; j++) {
                            double cosine = fabs(angle(approx[j % 4], approx[j - 2], approx[j - 1]));
                            maxCosine = MAX(maxCosine, cosine);
                        }

                        if (maxCosine < 0.3)
                            squares.push_back(approx);
                    }
                }
            }
        }

        NSLog(@"squares.size(): %lu", squares.size());

        for (size_t i = 0; i < squares.size(); i++) {
            cv::Rect rectangle = boundingRect(cv::Mat(squares[i]));
            NSLog(@"rectangle.x: %d", rectangle.x);
            NSLog(@"rectangle.y: %d", rectangle.y);

            if (i == squares.size() - 1) { // detecting the rectangle here
                const cv::Point* p = &squares[i][0];
                int n = (int)squares[i].size();
                NSLog(@"%d", n);

                line(image, cv::Point(507, 418), cv::Point(507 + 1776, 418 + 1372),
                     cv::Scalar(255, 0, 0), 2, 8);
                polylines(image, &p, &n, 1, true, cv::Scalar(255, 255, 0), 5, CV_AA);

                int fx1 = rectangle.x;
                NSLog(@"X: %d", fx1);
                int fy1 = rectangle.y;
                NSLog(@"Y: %d", fy1);
                int fx2 = rectangle.x + rectangle.width;
                NSLog(@"Width: %d", fx2);
                int fy2 = rectangle.y + rectangle.height;
                NSLog(@"Height: %d", fy2);

                line(image, cv::Point(fx1, fy1), cv::Point(fx2, fy2),
                     cv::Scalar(0, 0, 255), 2, 8);
            }
        }

        return image;
    }

Thanks.

+5
ios opencv
Dec 19 '12 at 18:00
2 answers

Here is the complete answer, using a small wrapper class to separate the C++ code from the Objective-C.

I had to use https://stackoverflow.com/a/3126322/ to get around my weak C++ knowledge, but I worked out everything needed for a clean C++ interface to Objective-C code, using the squares.cpp sample as the example. The goal is to keep the original C++ source as untouched as possible, and to keep the bulk of the openCV work in pure C++ files for (im)portability.

I have left my original answer in place, as this seems to go beyond the scope of a simple edit. The full demo project is on github.

CVViewController.h / CVViewController.m

  • pure Objective-C

  • communicates with the openCV C++ code through the WRAPPER ... it neither knows nor cares that C++ is handling these method calls behind the wrapper.

CVWrapper.h / CVWrapper.mm

  • Objective-C++

does as little as possible, really only two things ...

  • calls the Objective-C++ categories on UIImage to convert to and from UIImage <> cv::Mat
  • mediates between the Objective-C methods of CVViewController and calls to the CVSquares C++ (class) functions

CVSquares.h / CVSquares.cpp

  • pure C++
  • CVSquares.cpp declares its public function inside the class definition (in this case, one static function).
    This replaces the work of main{} in the original file.
  • We try to keep CVSquares.cpp as close as possible to the original C++ for portability.

CVViewController.m

    //remove 'magic numbers' from original C++ source so we can manipulate them from obj-C
    #define TOLERANCE 0.01
    #define THRESHOLD 50
    #define LEVELS 9

    UIImage* image = [CVSquaresWrapper detectedSquaresInImage:self.image
                                                    tolerance:TOLERANCE
                                                    threshold:THRESHOLD
                                                       levels:LEVELS];

CVSquaresWrapper.h

    // CVSquaresWrapper.h

    #import <Foundation/Foundation.h>

    @interface CVSquaresWrapper : NSObject

    + (UIImage*) detectedSquaresInImage:(UIImage*)image
                              tolerance:(CGFloat)tolerance
                              threshold:(NSInteger)threshold
                                 levels:(NSInteger)levels;

    @end

CVSquaresWrapper.mm

    // CVSquaresWrapper.mm
    // wrapper that talks to the C++ and to the obj-C classes

    #import "CVSquaresWrapper.h"
    #import "CVSquares.h"
    #import "UIImage+OpenCV.h"

    @implementation CVSquaresWrapper

    + (UIImage*) detectedSquaresInImage:(UIImage*)image
                              tolerance:(CGFloat)tolerance
                              threshold:(NSInteger)threshold
                                 levels:(NSInteger)levels
    {
        UIImage* result = nil;

        // convert from UIImage to cv::Mat openCV image format
        // this is a category on UIImage
        cv::Mat matImage = [image CVMat];

        // call the C++ class static member function
        // we want this function signature to exactly
        // mirror the form of the calling method
        matImage = CVSquares::detectedSquaresInImage(matImage, tolerance, threshold, levels);

        // convert back from cv::Mat openCV image format
        // to UIImage image format (category on UIImage)
        result = [UIImage imageFromCVMat:matImage];

        return result;
    }

    @end

CVSquares.h

    // CVSquares.h

    #ifndef __OpenCVClient__CVSquares__
    #define __OpenCVClient__CVSquares__

    // class definition
    // in this example we do not need a class,
    // as we have no instance variables and just one static function.
    // We could instead just declare the function, but this form seems clearer
    class CVSquares {
    public:
        static cv::Mat detectedSquaresInImage(cv::Mat image, float tol, int threshold, int levels);
    };

    #endif /* defined(__OpenCVClient__CVSquares__) */

CVSquares.cpp

    // CVSquares.cpp

    #include "CVSquares.h"

    using namespace std;
    using namespace cv;

    static int thresh = 50, N = 11;
    static float tolerance = 0.01;

    // declarations added so that we can move our
    // public function to the top of the file
    static void findSquares( const Mat& image, vector<vector<Point> >& squares );
    static void drawSquares( Mat& image, vector<vector<Point> >& squares );

    // this public function performs the role of
    // main{} in the original file (main{} is deleted)
    cv::Mat CVSquares::detectedSquaresInImage(cv::Mat image, float tol, int threshold, int levels)
    {
        vector<vector<Point> > squares;

        if (image.empty()) {
            cout << "Couldn't load " << endl;
        }

        tolerance = tol;
        thresh = threshold;
        N = levels;

        findSquares(image, squares);
        drawSquares(image, squares);

        return image;
    }

    // the rest of this file is identical to the original squares.cpp except:
    //   main{} is removed
    //   this line is removed from drawSquares:
    //       imshow(wndname, image);
    //   (obj-c will do the drawing)

UIImage + OpenCV.h

The UIImage category is an Objective-C++ file containing the code for converting between the UIImage and cv::Mat image formats. This is where your two methods -(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat and - (cv::Mat)cvMatWithImage:(UIImage *)image move to.

    //UIImage+OpenCV.h

    #import <UIKit/UIKit.h>

    @interface UIImage (UIImage_OpenCV)

    //cv::Mat to UIImage
    + (UIImage *)imageFromCVMat:(cv::Mat&)cvMat;

    //UIImage to cv::Mat
    - (cv::Mat)CVMat;

    @end

The implementation of the methods here is unchanged from your code (except that we no longer pass a UIImage in for conversion; instead we refer to self).

+6
Jan 02 '13 at 14:19

Here is a partial answer. It is not complete, because I am trying to do the same thing and hitting enormous difficulty at every turn. My knowledge is pretty strong in Objective-C but really weak in C++.

You should read this C++ wrapper guide.

And everything on Ievgen Khvedchenia's computer vision blog, especially the openCV tutorial. Ievgen has also posted an amazingly complete project on github to accompany the tutorial.

Having said that, I am still running into big problems getting openCV to compile and run smoothly.

For example, Ievgen's tutorial works just fine as a finished project, but if I try to recreate it from scratch I get the same openCV compilation errors that have been haunting me throughout. It's probably my poor understanding of C++ and its integration with obj-C.

Regarding squares.cpp

What you need to do:

  • remove int main(int /*argc*/, char** /*argv*/) from squares.cpp
  • remove imshow(wndname, image); from drawSquares (obj-c will execute the drawing)
  • create squares.h header file
  • make one or two public functions in the header file that you can call from obj-c (or from an obj-c/C++ wrapper)

Here is what I have so far ...

    class squares {
    public:
        static cv::Mat& findSquares( const cv::Mat& image,
                                     cv::vector<cv::vector<cv::Point> >& squares );

        static cv::Mat& drawSquares( cv::Mat& image,
                                     const cv::vector<cv::vector<cv::Point> >& squares );
    };

You should be able to reduce this to one method, for example processSquares, with one cv::Mat& image input and one cv::Mat& image return value. That method would declare squares and call findSquares and drawSquares within the .cpp file.

The wrapper will take a UIImage input, convert it to a cv::Mat image, call processSquares with that input, and get back a cv::Mat result. It will convert that result back to a UIImage and return it to the obj-c calling function.

So, that's a rough sketch of what we need to do. I will try to expand this answer once I actually manage to do it!

+2
Dec 27 '12 at 13:12


