Accessing the pixel values inside a contour border using OpenCV in Python

I am using OpenCV 3.0.0 on Python 2.7.9. I am trying to track an object in a video with a fixed background and evaluate some of its properties. Since there can be several moving objects in the image, I want to be able to distinguish them and track them individually in all other frames of the video.

One way, I thought I could do this by converting the image to binary, getting blobs (a tracked object, in this case) and getting the coordinates of the border of the object. Then I can go to these boundary coordinates in the image in grayscale, get the intensities of the pixels surrounded by this border, and track the intensity of the color gradient / pixel in other frames. Thus, I could isolate two objects from each other, so they will not be considered as new objects in the next frame.
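The binarization step described above can be sketched without any OpenCV machinery. This is a minimal, hypothetical example assuming a grayscale frame and a hand-picked threshold of 127; in a real pipeline `cv2.threshold` (e.g. with Otsu's method) would be the usual route:

```python
import numpy as np

# Hypothetical grayscale frame: dark background with two bright blobs
gray = np.zeros((10, 10), dtype=np.uint8)
gray[2:4, 2:4] = 200   # first object
gray[6:9, 6:9] = 180   # second object

# Fixed-threshold binarization (cv2.threshold would do the same job)
binary = np.where(gray > 127, 255, 0).astype(np.uint8)

print(np.count_nonzero(binary))  # 13 foreground pixels (4 + 9)
```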

I have the boundary coordinates of the contour, but I do not know how to get the intensity of the pixels inside this border. Can someone help me with this?

Thanks!

+10
3 answers

Following up from our comments, what you can do is create a list of numpy arrays, where each element holds the intensities describing the interior of each object's contour. Specifically, for each contour, create a binary mask that fills in the interior of the contour, find the (x, y) coordinates of the filled-in object, then index into your image and grab the intensities.

I don't know exactly how your code is set up, but let's assume you have a grayscale image named img. You may need to convert the image to grayscale first, because cv2.findContours works on single-channel images. You would then call cv2.findContours in the usual way:

import cv2
import numpy as np

#... Put your other code here....
#....

# Call if necessary
#img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Call cv2.findContours
# Note: in OpenCV 3.x this returns (image, contours, hierarchy),
# and the flag is cv2.CHAIN_APPROX_NONE (cv2.cv.* no longer exists)
_, contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

contours is a list of 3D numpy arrays, where each array has size N x 1 x 2, with N being the number of points in that contour.
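To illustrate that N x 1 x 2 layout, here is a hand-built, hypothetical 4-point contour, squeezed down to plain (x, y) rows:

```python
import numpy as np

# A hypothetical 4-point contour in OpenCV's N x 1 x 2 layout
contour = np.array([[[1, 1]], [[4, 1]], [[4, 4]], [[1, 4]]], dtype=np.int32)
print(contour.shape)   # (4, 1, 2)

# Squeeze away the singleton middle axis to get plain (x, y) pairs
points = contour.squeeze(axis=1)
print(points.shape)    # (4, 2)
```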

Now, to do what you are after, something like this would work:

# Initialize empty list
lst_intensities = []

# For each list of contour points...
for i in range(len(contours)):
    # Create a mask image that contains the contour filled in
    cimg = np.zeros_like(img)
    cv2.drawContours(cimg, contours, i, color=255, thickness=-1)

    # Access the image pixels and create a 1D numpy array then add to list
    pts = np.where(cimg == 255)
    lst_intensities.append(img[pts[0], pts[1]])

What this code does is: for each contour, we create a blank image, then draw the filled-in contour onto it. We fill in the interior of the contour by setting thickness to -1, so the interior pixels get the value 255. numpy.where then finds all coordinates in the mask that are equal to 255, and we index into the grayscale image at those locations to grab the intensities.

lst_intensities contains 1D numpy arrays, one per object, where each array holds the intensities belonging to the interior of that object's contour. Specifically, lst_intensities[i] gives you the intensities of the i-th object in your image.
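Once that list is built, each object can be summarized (e.g. by its mean interior intensity) for tracking across frames. A self-contained numpy-only sketch, with a hypothetical image and mask standing in for img and cimg above:

```python
import numpy as np

# Hypothetical grayscale image and a filled-contour mask
img = np.arange(25, dtype=np.uint8).reshape(5, 5)
cimg = np.zeros_like(img)
cimg[1:3, 1:3] = 255          # pretend drawContours filled this region

# Grab the interior intensities exactly as in the loop above
pts = np.where(cimg == 255)
intensities = img[pts[0], pts[1]]
print(intensities)            # [ 6  7 11 12]
print(intensities.mean())     # 9.0 -- a per-object summary statistic
```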

+18

The answer from @rayryeng is great!

One small note from my implementation: np.where() returns a tuple of index arrays, not (x, y) pairs. pts[0] holds the row indices, which correspond to the height of the image, and pts[1] holds the column indices, which correspond to the width. Since img.shape is (rows, cols, channels), indexing as img[pts[0], pts[1]] slices the ndarray img correctly.
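The row/column ordering is easy to verify with a tiny hypothetical mask:

```python
import numpy as np

mask = np.zeros((4, 5), dtype=np.uint8)
mask[1, 2] = 255   # row 1, column 2
mask[3, 0] = 255   # row 3, column 0

pts = np.where(mask == 255)
print(pts[0])      # row indices:    [1 3]
print(pts[1])      # column indices: [2 0]
```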

+3

I cannot comment yet, so I am adding this as an answer.

Actually, there is a slight improvement to that good code: we can skip the line where we get the points. Because the grayscale image and the np.zeros temporary image have the same shape, we can use the boolean mask directly inside the brackets. Something like this:

# (...) opening image, converting into grayscale, detect contours (...)
intensityPer = 0.15
for c in contours:
    temp = np.zeros_like(grayImg)
    cv2.drawContours(temp, [c], 0, (255,255,255), -1)
    if np.mean(grayImg[temp==255]) > intensityPer*255:
        pass # here your code

With this snippet, we guarantee that the mean intensity of the region inside the contour is at least 15% of the maximum intensity.
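The equivalence this shortcut relies on can be checked directly: boolean-mask indexing img[temp == 255] returns the same pixels, in the same row-major order, as indexing with the np.where coordinates. A numpy-only sketch with a hypothetical image and mask:

```python
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
temp = np.zeros_like(img)
temp[1:3, 1:3] = 255

# Boolean indexing, as in the loop above
direct = img[temp == 255]

# Explicit np.where indexing, as in the earlier answer
pts = np.where(temp == 255)
via_where = img[pts[0], pts[1]]

print(np.array_equal(direct, via_where))  # True
```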

0

Source: https://habr.com/ru/post/1612400/

