I am trying to build a robot that I can control with basic eye movements. I point a webcam at my face, and depending on the position of my pupil, the robot moves in a certain way. If the pupil is in the upper, lower, left, or right region of the eye, the robot will move forward, backward, left, or right, respectively.
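Concretely, the mapping I have in mind is something like this (the command names are just placeholders for whatever my robot's API ends up using):

# Hypothetical mapping from the pupil's offset inside the eye region to a
# robot command. (dx, dy) is the pupil center relative to the center of the
# eye bounding box; dead_zone ignores small jitter around the middle.
def offset_to_command(dx, dy, dead_zone=5):
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return 'stop'
    if abs(dy) > abs(dx):
        # in image coordinates, up is negative y
        return 'forward' if dy < 0 else 'backward'
    return 'left' if dx < 0 else 'right'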
My initial plan was to use a Haar cascade to find the left eye, then run cv2.HoughCircles on the eye region to find the center of the pupil. I would determine where the pupil was inside the eye by measuring the distance from the center of the Hough circle to the borders of the detected eye region.
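To make that concrete, here is roughly the loop I have in mind (assuming frame comes from a cv2.VideoCapture read; the HoughCircles parameter values are guesses I am still tuning):

# Sketch of the plan: detect the eye with the cascade, run HoughCircles on
# that region, then measure the circle center against the eye box borders.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (ex, ey, ew, eh) in eyes:
    roi = gray[ey:ey + eh, ex:ex + ew]
    roi = cv2.GaussianBlur(roi, (7, 7), 0)  # smooth before the Hough transform
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=ew,
                               param1=50, param2=30,
                               minRadius=eh // 8, maxRadius=eh // 3)
    if circles is not None:
        cx, cy, r = circles[0][0]
        # pupil offset from the center of the eye region
        dx, dy = cx - ew / 2, cy - eh / 2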
So, for the first part of my code, I am hoping to track the center of the pupil, as can be seen in this video: https://youtu.be/aGmGyFLQAFM?t=38
But when I run my code, it cannot consistently find the center of the pupil; the Hough circle is often drawn in the wrong place. How can I make my program consistently find the center of the pupil, even as the eye moves?
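One alternative I have come across (I have not verified that it is actually more stable) is to skip the circle fit entirely: threshold the dark pupil blob inside the eye region and take the centroid of the largest contour with image moments, roughly like this (roi is the grayscale eye region from above, and the threshold value 40 is a guess that would need tuning):

# The pupil is usually the darkest blob in the eye region, so binarize and
# take the centroid of the largest contour.
_, binary = cv2.threshold(roi, 40, 255, cv2.THRESH_BINARY_INV)
# OpenCV 4.x returns (contours, hierarchy)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m['m00'] > 0:
        cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']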
Would it be possible/better/easier to tell my program where the pupil is at the start? I have looked at some other eye tracking methods, but I cannot piece together a general algorithm. If someone could help me shape one, that would be much appreciated! One paper I found: https://arxiv.org/ftp/arxiv/papers/1202/1202.6517.pdf
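By "telling the program where the pupil is at the start" I mean something like clicking once on the pupil in the first frame and tracking from there. A minimal sketch of just the seeding part:

# Click the pupil once in the displayed frame and remember that point as
# the starting position for tracking.
seed = []

def on_click(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        seed.append((x, y))

cv2.namedWindow('frame')
cv2.setMouseCallback('frame', on_click)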
Here is the start of my code:

import numpy as np
import cv2

# Haar cascades shipped with OpenCV; the .xml files must be in the working
# directory (or pass full paths into the OpenCV data folder)
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')