Track eyeball position with webcam, OpenCV and Python

I am trying to create a robot that I can control with basic eye movements. I point a webcam at my face and, depending on the position of my pupil, the robot moves in a certain way. If the pupil is in the top, bottom, left, or right corner of the eye, the robot moves forward, backward, left, or right, respectively.

My initial plan was to use a Haar eye cascade to find my left eye, then run HoughCircles on the eye region to find the center of the pupil. I would determine where the pupil was within the eye by measuring the distance from the center of the Hough circle to the edges of the detected eye region.

So, for the first part of my code, I hope to track the center of the pupil, as in this video: https://youtu.be/aGmGyFLQAFM?t=38

But when I run my code, it cannot consistently find the center of the pupil: the Hough circle is often drawn in the wrong place. How can I make my program reliably find the center of the pupil, even as the eye moves?

Would it be possible/better/easier to tell my program where the pupil is at the start? I have looked at some other eye-tracking methods, but I cannot put together a general algorithm. If someone could help outline one, it would be much appreciated! https://arxiv.org/ftp/arxiv/papers/1202/1202.6517.pdf

import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')

# number selects the camera
cap = cv2.VideoCapture(0)

while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    #faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    eyes = eye_cascade.detectMultiScale(gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(img, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
        roi_gray2 = gray[ey:ey + eh, ex:ex + ew]
        roi_color2 = img[ey:ey + eh, ex:ex + ew]
        circles = cv2.HoughCircles(roi_gray2, cv2.HOUGH_GRADIENT, 1, 20,
                                   param1=50, param2=30,
                                   minRadius=0, maxRadius=0)
        try:
            for i in circles[0, :]:
                # draw the outer circle
                cv2.circle(roi_color2, (i[0], i[1]), i[2], (255, 255, 255), 2)
                print("drawing circle")
                # draw the center of the circle
                cv2.circle(roi_color2, (i[0], i[1]), 2, (255, 255, 255), 3)
        except Exception as e:
            print(e)
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
1 answer

Based on some work I did before, I see two alternatives:

  • Train a Haar cascade for eyeball detection using training images centered on the pupil and cropped to the width of the eyeball. I found this works better than Hough circles or OpenCV's stock eye detector (the one used in your code).

  • Use dlib's facial landmarks to locate the eye region. Then use the contrast between the white sclera and the dark iris/pupil, together with contours, to estimate the center of the pupil. This gave much better results.


Source: https://habr.com/ru/post/1271046/
