OpenCV pointer for bitmap processing

I created a generic contour detection library that is loaded by a Delphi / Lazarus application. The main application passes a pointer to a bitmap, which is processed by a function inside the library.

Here's the function inside the library. The "img" parameter is a pointer to my bitmap.

extern "C" { void detect_contour(int imgWidth, int imgHeight, unsigned char * img, int &x, int &y, int &w, int &h) { Mat threshold_output; vector<vector<Point> > contours; vector<Vec4i> hierarchy; Mat src_gray; int thresh = 100; int max_thresh = 255; RNG rng(12345); /// Load source image and convert it to gray Mat src(imgHeight, imgWidth, CV_8UC4); int idx; src.data = img; /// Convert image to gray and blur it cvtColor( src, src_gray, CV_BGRA2GRAY ); blur( src_gray, src_gray, Size(10,10) ); /// Detect edges using Threshold threshold( src_gray, threshold_output, thresh, 255, THRESH_BINARY ); /// Find contours findContours( threshold_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) ); /// Approximate contours to polygons + get bounding rects and circles vector<vector<Point> > contours_poly( contours.size() ); vector<Rect> boundRect( contours.size() ); vector<Point2f>center( contours.size() ); vector<float>radius( contours.size() ); int lArea = 0; int lBigger = -1; for( int i = 0; i < contours.size(); i++ ) { approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true ); boundRect[i] = boundingRect( Mat(contours_poly[i]) ); if(lArea < boundRect[i].width * boundRect[i].height) { lArea = boundRect[i].width * boundRect[i].height; lBigger = i; } } if(lBigger > -1) { x = boundRect[lBigger].x; y = boundRect[lBigger].y; w = boundRect[lBigger].width; h = boundRect[lBigger].height; } } } 

On the Delphi side, I pass a pointer to an array of this structure:

    TBGRAPixel = packed record
      blue, green, red, alpha: byte;
    end;
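Seen from the C++ side, each pixel is therefore four bytes in blue, green, red, alpha order, which is the BGRA layout that CV_8UC4 together with CV_BGRA2GRAY in the library assumes. The struct below is only an illustration of that layout, it does not exist in the library:

    // Hypothetical C++ mirror of TBGRAPixel: packed, 4 bytes, B, G, R, A order.
    // imgWidth * imgHeight of these are expected at the 'img' pointer.
    #pragma pack(push, 1)
    struct BGRAPixel
    {
        unsigned char blue, green, red, alpha;
    };
    #pragma pack(pop)
    static_assert(sizeof(BGRAPixel) == 4, "pixel must be 4 bytes to match CV_8UC4");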

I need to process the bitmap in memory, so I do not load the file from the library.

The question is: is it correct to assign a bitmap buffer to a cv::Mat this way?

I ask because the code works without problems on Linux, but does not work on Windows when compiled with MinGW.

Note: it crashes with SIGSEGV on this line:

 blur( src_gray, src_gray, Size(10,10) ); 

EDIT: the SIGSEGV only occurs if I compile OpenCV in Release mode; in Debug mode it works fine.

Thanks in advance, Leonardo.

1 answer

So you create an image this way:

    Mat src(imgHeight, imgWidth, CV_8UC4);
    int idx;
    src.data = img;

The declaration and instantiation Mat src(imgHeight, imgWidth, CV_8UC4) allocates memory for a new image together with a reference counter that automatically tracks how many references point to the allocated memory. Then you change the instance variable with

    src.data = img;

When the src instance goes out of scope, the destructor is called and will most likely try to free the memory through the src.data pointer you reassigned, which can cause the segmentation fault. The right way is not to modify the object's instance variable at all, but to use the appropriate constructor when creating src:

 Mat src(imgHeight, imgWidth, CV_8UC4, img); 

This way you only create a matrix header over the existing data; no reference counting takes place and the src destructor will not try to release the buffer.
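Applied to the function above, the change is a one-liner; the sketch below also shows the optional step argument of that constructor, which is only needed if the Delphi bitmap has padded rows (an assumption, nothing in the question says it does):

    // Wrap the caller's buffer instead of allocating a new one and reassigning data.
    // The fourth argument is the external buffer; the optional fifth argument is the
    // row stride in bytes and defaults to AUTO_STEP (imgWidth * 4 for CV_8UC4).
    Mat src(imgHeight, imgWidth, CV_8UC4, img /*, rowStrideInBytes */);

    Mat src_gray;
    cvtColor( src, src_gray, CV_BGRA2GRAY );
    blur( src_gray, src_gray, Size(10,10) );
    // ... the rest of detect_contour stays the same ...

With this constructor src does not own the pixels, so the buffer must stay valid for as long as src is used; since the Delphi side keeps the bitmap alive for the duration of the call, that holds here.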

Good luck

EDIT: I'm not sure the segfault is actually caused by the memory being freed incorrectly, but it is good practice not to break the data abstraction by assigning to instance variables directly.


Source: https://habr.com/ru/post/900815/

