How to identify markers for Watershed in OpenCV?

I am writing for Android with OpenCV. I want to split an image similar to the one below using a marker-controlled watershed, without the user marking the image manually. I plan to use the regional maxima as markers.

minMaxLoc() would give me the value, but how can I restrict it to the blobs that interest me? Can I use the results from findContours() or cvBlob blobs to restrict the ROI and find the maximum inside each blob?
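(To illustrate the per-blob part of the question: assuming the blobs have already been turned into a label image somehow, e.g. via findContours or connected-component labeling, the maximum can be restricted to each blob with a mask per label. The arrays here are hypothetical toy data, not the real image.)

```python
import numpy as np

# Toy image and a hypothetical label image (0 = background,
# 1 and 2 = two blobs found by e.g. connected-component labeling).
img = np.array([[3, 7, 0, 2],
                [1, 5, 0, 9]])
labels = np.array([[1, 1, 0, 2],
                   [1, 1, 0, 2]])

maxima = {}
for lab in (1, 2):
    mask = labels == lab
    # Suppress everything outside the blob, then take the argmax.
    idx = np.unravel_index(np.argmax(np.where(mask, img, -1)), img.shape)
    maxima[lab] = (img[idx], idx)   # per-blob maximum value and location
```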

input image

image-processing opencv computer-vision image-segmentation watershed
Jul 02
2 answers

First of all: the minMaxLoc function finds only the global minimum and global maximum of its input, so it is mostly useless for determining regional minima and/or regional maxima. But your idea is correct: extracting markers based on regional minima/maxima and then performing a marker-based watershed transform is perfectly fine. Let me clarify what the watershed transform is and how to use the implementation present in OpenCV correctly.
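(As a minimal sketch of the difference: a 3×3 maximum filter compared against the image gives its local maxima, which is the same effect as comparing the image to a gray-scale dilation of itself. This is a pure-NumPy toy, not the author's code, and true regional maxima additionally need plateau handling; with OpenCV one would typically use cv2.dilate and a comparison instead.)

```python
import numpy as np

def local_maxima_3x3(img):
    # Compare each pixel against the maximum of its 3x3 neighborhood,
    # computed by stacking the nine shifted copies of an edge-padded image.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    neigh = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return img == neigh.max(axis=0)

a = np.array([[1, 2, 1, 1],
              [2, 5, 2, 1],
              [1, 2, 1, 1]])
mask = local_maxima_3x3(a)   # only the 5 is a local maximum here
```

minMaxLoc on the same array would report only the single global peak, whereas this mask keeps one hit per regional peak, which is what the markers need.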

A decent amount of the literature on the watershed describes it roughly as follows (I may skip some details; if you are not sure about something, ask). Consider the surface of a region that you know contains valleys and peaks (among other details that are irrelevant for us here). Suppose that below this surface there is water, colored water. Now make holes in each valley of your surface; the water then starts to fill the whole region. At some point, differently colored waters would meet, and when that is about to happen, you build a dam so that they never touch each other. The result is a collection of dams, the watershed, separating all the differently colored waters.

Now, if you make too many holes in that surface, you end up with too many regions: over-segmentation. If you make too few, you get under-segmentation. So almost any paper that suggests using the watershed also presents techniques for avoiding these problems in the application it deals with.

I wrote all this (which is perhaps too naive for anyone who knows what the watershed transform is) because it reflects directly how the implementations should be used (which the currently accepted answer does in a completely wrong manner). Let us start with the OpenCV example, using the Python bindings.

The image presented in the question consists of many objects that are mostly too close together and in some cases overlapping. The usefulness of the watershed here is to separate these objects correctly, not to group them into a single component. So you need at least one marker per object and good markers for the background. As an example, first binarize the input image with Otsu's method and perform a morphological opening to remove small objects. The result of this step is shown in the left image below. Now apply the distance transform to the binary image, obtaining the image on the right.

[images: Otsu-binarized input after morphological opening (left); its distance transform (right)]

With the result of the distance transform, we can consider a threshold such that only the regions farthest from the background are kept (left image below). Doing that, we obtain one marker per object by labeling the different regions after the threshold. Now we take the border of a dilated version of the left image above to compose our marker. The full marker is shown below on the right (some markers are too dark to be visible, but each white region in the left image is represented in the right image).

[images: thresholded distance transform (left); the composed marker image (right)]
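(The labeling step described above can be sketched like this, with a toy array standing in for the thresholded distance transform; scipy.ndimage.label is the same helper the full script below uses.)

```python
import numpy as np
from scipy.ndimage import label

# Toy stand-in for a thresholded distance transform:
# two separate bright regions on a dark background.
dt = np.array([[9, 9, 0, 0, 0],
               [9, 9, 0, 0, 8],
               [0, 0, 0, 8, 8]], dtype=np.uint8)

# label() gives each connected white region its own integer id,
# which is exactly one watershed marker per object.
markers, ncc = label(dt > 0)
```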

The marker we have here makes a lot of sense. Each colored water == one marker starts to fill its region, and the watershed transform builds dams to prevent the different "colors" from merging. If we perform the transform, we get the image on the left. Keeping only the dams and compositing them with the original image, we get the result on the right.

[images: watershed labeling (left); dams overlaid on the original image (right)]

    import sys
    import cv2
    import numpy
    from scipy.ndimage import label

    def segment_on_dt(a, img):
        border = cv2.dilate(img, None, iterations=5)
        border = border - cv2.erode(border, None)

        dt = cv2.distanceTransform(img, 2, 3)
        dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
        _, dt = cv2.threshold(dt, 180, 255, cv2.THRESH_BINARY)
        lbl, ncc = label(dt)
        lbl = lbl * (255 / ncc)
        # Completing the markers now.
        lbl[border == 255] = 255

        lbl = lbl.astype(numpy.int32)
        cv2.watershed(a, lbl)

        lbl[lbl == -1] = 0
        lbl = lbl.astype(numpy.uint8)
        return 255 - lbl

    img = cv2.imread(sys.argv[1])

    # Pre-processing.
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, img_bin = cv2.threshold(img_gray, 0, 255, cv2.THRESH_OTSU)
    img_bin = cv2.morphologyEx(img_bin, cv2.MORPH_OPEN,
            numpy.ones((3, 3), dtype=int))

    result = segment_on_dt(img, img_bin)
    cv2.imwrite(sys.argv[2], result)

    result[result != 255] = 0
    result = cv2.dilate(result, None)
    img[result == 255] = (0, 0, 255)
    cv2.imwrite(sys.argv[3], img)
Jan 31 '13 at 2:15

I would like to explain with a simple code how to use the watershed here. I am using OpenCV-Python, but I hope you won't have any difficulty understanding it.

In this code, I use the watershed as a tool for separating the foreground from the background. (This example is a Python adaptation of the C++ code from the OpenCV cookbook.) It is a simple case for understanding the watershed. In addition, you can use the watershed to count the number of objects in this image; that would be a slightly extended version of this code.

1 - First, we load the image, convert it to grayscale, and threshold it with a suitable value. I used Otsu's binarization, so it finds the best threshold value automatically.

    import cv2
    import numpy as np

    img = cv2.imread('sofwatershed.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

Below is the result:

[image: thresholded input]

(Even this result is good, because there is a lot of contrast between the foreground and the background.)

2 - Now we have to create the marker. The marker is an image of the same size as the original image, of type 32SC1 (32-bit signed, single channel).

Now there will be some regions in the original image where you are sure they belong to the foreground. Mark those regions with 255 in the marker image. The regions you are sure are background are marked with 128. The regions you are not sure about are marked 0. That is what we do next.

A - Foreground region: We already have the thresholded image where the pills are white. We erode them a little, so that we are sure the remaining region belongs to the foreground.

    fg = cv2.erode(thresh, None, iterations=2)

fg :

[image: fg, the eroded sure-foreground regions]

B - Background region: Here we dilate the thresholded image so that the background region shrinks. We are sure the remaining black region is 100% background. We set it to 128.

    bgt = cv2.dilate(thresh, None, iterations=3)
    ret, bg = cv2.threshold(bgt, 1, 128, 1)

Now we get bg as follows:

[image: bg, the sure background marked with 128]

C - Now add both fg and bg :

    marker = cv2.add(fg, bg)

Below we get:

[image: the combined marker image]

Now we can clearly see from the image above that the white region is 100% foreground, the gray region is 100% background, and the black region is the one we are unsure about.

Then convert it to 32SC1:

    marker32 = np.int32(marker)

3 - Finally, apply the watershed and convert the result back to a uint8 image:

    cv2.watershed(img, marker32)
    m = cv2.convertScaleAbs(marker32)

m:

[image: m, the watershed output converted back to uint8]

4 - We threshold it properly to get the mask and perform bitwise_and with the input image:

    ret, thresh = cv2.threshold(m, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    res = cv2.bitwise_and(img, img, mask=thresh)

res:

[image: res, the final segmented result]

Hope this helps!

THE ARK

Jul 11 '12 at 17:30
