OpenCV Euclidean clustering vs. findContours

I have the following image mask:

[mask image]

I want to apply something similar to cv::findContours, but that algorithm only groups points that are actually connected. I want to do this with some tolerance, i.e. also group pixels that lie within a given radius of each other: essentially hierarchical clustering with a Euclidean distance threshold.

Is it implemented in OpenCV? Or is there a quick approach to implement this?

What I want looks like this

http://www.pointclouds.org/documentation/tutorials/cluster_extraction.php

but applied to the white pixels of this mask.

Thanks.

+5
2 answers

You can use partition for this:

partition splits a set of elements into equivalence classes. You can define your equivalence class as all points within a given Euclidean distance (the radius tolerance).

If you have C++11, you can simply use a lambda function:

int th_distance = 18;                // radius tolerance
int th2 = th_distance * th_distance; // squared radius tolerance

vector<int> labels;
int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
    return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
});

Otherwise, you can just build a functor (see details in the code below).

With the appropriate distance radius (I found that 18 works well in this image), I got:

[result image: each cluster drawn in a different color]

Full code:

#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

using namespace std;
using namespace cv;

struct EuclideanDistanceFunctor
{
    int _dist2;
    EuclideanDistanceFunctor(int dist) : _dist2(dist*dist) {}

    bool operator()(const Point& lhs, const Point& rhs) const
    {
        return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < _dist2;
    }
};

int main()
{
    // Load the image (grayscale)
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Get all non-black points
    vector<Point> pts;
    findNonZero(img, pts);

    // Define the radius tolerance
    int th_distance = 18; // radius tolerance

    // Apply partition
    // All pixels within the radius tolerance distance will belong to the same class (same label)
    vector<int> labels;

    // With functor
    //int n_labels = partition(pts, labels, EuclideanDistanceFunctor(th_distance));

    // With lambda function (requires C++11)
    int th2 = th_distance * th_distance;
    int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
        return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
    });

    // You can save all points of the same class in a vector (one for each class), just like findContours
    vector<vector<Point>> contours(n_labels);
    for (size_t i = 0; i < pts.size(); ++i)
    {
        contours[labels[i]].push_back(pts[i]);
    }

    // Draw results

    // Build a vector of random colors, one for each class (label)
    vector<Vec3b> colors;
    for (int i = 0; i < n_labels; ++i)
    {
        colors.push_back(Vec3b(rand() & 255, rand() & 255, rand() & 255));
    }

    // Draw the labels
    Mat3b lbl(img.rows, img.cols, Vec3b(0, 0, 0));
    for (size_t i = 0; i < pts.size(); ++i)
    {
        lbl(pts[i]) = colors[labels[i]];
    }

    imshow("Labels", lbl);
    waitKey();

    return 0;
}
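Since the per-class point vectors can be used much like findContours output, here is a small follow-up sketch (my own addition, not part of the original answer) that draws an axis-aligned bounding box around each cluster. It assumes the contours and lbl variables built in the code above and uses only standard OpenCV calls:

// Hedged follow-up sketch: draw a bounding box around each cluster.
// Assumes the `contours` and `lbl` variables from the code above.
for (size_t i = 0; i < contours.size(); ++i)
{
    if (contours[i].empty())
        continue;
    Rect box = boundingRect(contours[i]);          // bounding box of this cluster's points
    rectangle(lbl, box, Scalar(255, 255, 255), 1); // outline it on the label image
}
imshow("Labels with boxes", lbl);
waitKey();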
+10

I suggest using the DBSCAN algorithm. It is exactly what you are looking for. A simple Euclidean distance works, and a Manhattan distance may even work better. The input is all the white points (after thresholding); the output is a set of groups of points (your connected components).

Here is a DBSCAN C++ implementation.

EDIT: I tried DBSCAN myself and here is the result: [result image]

As you can see, only truly connected points are considered as one cluster.

This result was obtained with the standard DBSCAN algorithm using EPS = 3 (static, no tuning needed), MinPoints = 1 (static), and the Manhattan distance.
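For completeness, below is a minimal sketch of how such a DBSCAN-style grouping could look in C++ with OpenCV. It is not the implementation linked above; it only assumes the same settings (EPS = 3, MinPoints = 1, Manhattan distance), and with MinPoints = 1 every point is a core point, so the algorithm reduces to a breadth-first grouping of points chained together within the EPS radius. The image path is a placeholder.

#include <opencv2/opencv.hpp>
#include <vector>
#include <queue>
#include <cstdlib>

using namespace std;
using namespace cv;

// DBSCAN-style clustering sketch with MinPoints = 1 and Manhattan distance.
// Every point is a core point, so this reduces to grouping points that are
// chained together within the eps radius (O(n^2), fine for small masks).
static vector<int> clusterPoints(const vector<Point>& pts, int eps)
{
    vector<int> labels(pts.size(), -1);
    int current = 0;
    for (size_t i = 0; i < pts.size(); ++i)
    {
        if (labels[i] != -1) continue;   // already assigned to a cluster
        labels[i] = current;
        queue<size_t> q;
        q.push(i);
        while (!q.empty())               // expand the cluster breadth-first
        {
            size_t p = q.front(); q.pop();
            for (size_t j = 0; j < pts.size(); ++j)
            {
                if (labels[j] != -1) continue;
                // Manhattan distance between the two points
                int d = std::abs(pts[p].x - pts[j].x) + std::abs(pts[p].y - pts[j].y);
                if (d <= eps)
                {
                    labels[j] = current;
                    q.push(j);
                }
            }
        }
        ++current;
    }
    return labels;
}

int main()
{
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE); // placeholder path
    vector<Point> pts;
    findNonZero(img, pts);                                  // all white (non-zero) pixels

    vector<int> labels = clusterPoints(pts, 3);             // EPS = 3
    // labels[i] is now the cluster index of pts[i]
    return 0;
}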

+2
