Skeletonization issues for contour extraction

I found this code to get a skeletal image. I have a circle image ( https://docs.google.com/file/d/0ByS6Z5WRz-h2RXdzVGtXUTlPSGc/edit?usp=sharing ).

import cv2
import numpy as np

img = cv2.imread(nomeimg, 0)  # nomeimg is the image file name
size = np.size(img)
skel = np.zeros(img.shape, np.uint8)

ret, img = cv2.threshold(img, 127, 255, 0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
done = False

while not done:
    eroded = cv2.erode(img, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(img, temp)
    skel = cv2.bitwise_or(skel, temp)
    img = eroded.copy()

    zeros = size - cv2.countNonZero(img)
    if zeros == size:
        done = True

print("skel")
print(skel)
cv2.imshow("skel", skel)
cv2.waitKey(0)

The problem is that the resulting image is not a "skeleton" but a lot of dots! My goal is to extract the contour perimeter after skeletonizing the image. How can I change my code to fix this? Is cv2.findContours the right tool to find the contour of the skeletonized circle?

+2
2 answers

You need to invert black and white and fill in the holes by calling cv2.dilate:

import numpy as np
import cv2

img = cv2.imread("e_5.jpg", 0)
size = np.size(img)
skel = np.zeros(img.shape, np.uint8)

ret, img = cv2.threshold(img, 127, 255, 0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

# invert black and white so the circle becomes the white (foreground) region,
# then dilate to close small holes
img = 255 - img
img = cv2.dilate(img, element, iterations=3)

done = False
while not done:
    eroded = cv2.erode(img, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(img, temp)
    skel = cv2.bitwise_or(skel, temp)
    img = eroded.copy()

    zeros = size - cv2.countNonZero(img)
    if zeros == size:
        done = True

Here is the result:

(result image: the extracted skeleton, which still has gaps)

But the result is not good because there are many gaps. The following algorithm works better; it uses the hit-or-miss functions from scipy.ndimage.morphology:

import scipy.ndimage.morphology as m
import numpy as np
import cv2
import matplotlib.pyplot as plt

def skeletonize(img):
    # hit-and-miss structuring-element pairs used for morphological thinning
    h1 = np.array([[0, 0, 0], [0, 1, 0], [1, 1, 1]])
    m1 = np.array([[1, 1, 1], [0, 0, 0], [0, 0, 0]])
    h2 = np.array([[0, 0, 0], [1, 1, 0], [0, 1, 0]])
    m2 = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
    hit_list = []
    miss_list = []
    for k in range(4):
        hit_list.append(np.rot90(h1, k))
        hit_list.append(np.rot90(h2, k))
        miss_list.append(np.rot90(m1, k))
        miss_list.append(np.rot90(m2, k))
    img = img.copy()
    while True:
        last = img
        for hit, miss in zip(hit_list, miss_list):
            hm = m.binary_hit_or_miss(img, hit, miss)
            # delete the pixels matched by the hit-and-miss transform
            img = np.logical_and(img, np.logical_not(hm))
        # stop when a full pass no longer changes the image
        if np.all(img == last):
            break
    return img

img = cv2.imread("e_5.jpg", 0)
ret, img = cv2.threshold(img, 127, 255, 0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
img = 255 - img
img = cv2.dilate(img, element, iterations=3)

skel = skeletonize(img)
plt.imshow(skel, cmap="gray", interpolation="nearest")
plt.show()

Result:

(result image: the skeleton produced by skeletonize)
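
To get back to the original goal (the perimeter), the skeleton can be fed to cv2.findContours. A minimal sketch, assuming skel is the boolean array returned by skeletonize above; note that the number of values returned by cv2.findContours differs between OpenCV versions, and that the contour of a one-pixel-wide skeleton runs around both sides of it, so cv2.arcLength gives roughly twice the skeleton length:

import numpy as np
import cv2

# convert the boolean skeleton to an 8-bit single-channel image
skel_u8 = skel.astype(np.uint8) * 255

# [-2] picks the contour list regardless of whether this OpenCV version
# returns (contours, hierarchy) or (image, contours, hierarchy)
contours = cv2.findContours(skel_u8, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2]

for cnt in contours:
    print(cv2.arcLength(cnt, True))  # perimeter of the traced contour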

+5

Your skeletonization algorithm computes the skeleton of the white area (a small demonstration follows the list below):

  • Erode: sets the output pixel to the minimum of all pixels inside the structuring element (black < white)
  • Dilate: the opposite of erode; sets the output pixel to the maximum of all pixels inside the structuring element (white > black)
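
A quick way to see this behaviour (the 7×7 test array below is just an illustration, not from the original post):

import numpy as np
import cv2

# a small white square on a black background
img = np.zeros((7, 7), np.uint8)
img[2:5, 2:5] = 255

element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

# erosion keeps only pixels whose whole neighbourhood is white: the white area shrinks
print(cv2.countNonZero(cv2.erode(img, element)))   # 1
# dilation keeps pixels with at least one white neighbour: the white area grows
print(cv2.countNonZero(cv2.dilate(img, element)))  # 21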

To fix your code, you can change the parameters for your threshold function:

 ret, img = cv2.threshold(img, 240, 255, 1)

The threshold types and parameters are described in the OpenCV documentation for cv2.threshold; type 1 corresponds to cv2.THRESH_BINARY_INV, which inverts the result so the circle becomes the white foreground.
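
Put together with the loop from the question, a sketch of how the corrected code might look (the file name e_5.jpg is taken from the other answer):

import numpy as np
import cv2

img = cv2.imread("e_5.jpg", 0)
size = np.size(img)
skel = np.zeros(img.shape, np.uint8)

# threshold 240 with type 1 (cv2.THRESH_BINARY_INV): the dark circle becomes white
ret, img = cv2.threshold(img, 240, 255, 1)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

done = False
while not done:
    eroded = cv2.erode(img, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(img, temp)
    skel = cv2.bitwise_or(skel, temp)
    img = eroded.copy()
    if size - cv2.countNonZero(img) == size:
        done = True

cv2.imshow("skel", skel)
cv2.waitKey(0)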

0

Source: https://habr.com/ru/post/969343/

