I currently have a stereo camera rig set up. I calibrated both cameras and obtained the intrinsic matrices K1 and K2:
K1 = [2297.311, 0, 319.498; 0, 2297.313, 239.499; 0, 0, 1];
K2 = [2297.304, 0, 319.508; 0, 2297.301, 239.514; 0, 0, 1];
I also estimated the fundamental matrix F between the two cameras using findFundamentalMat() from OpenCV. I checked the epipolar constraint x2'*F*x1 = 0 using a pair of matching points x1 and x2 (in pixel coordinates), and it is very close to 0:
F = [5.672563368940768e-10, 6.265600996978877e-06, -0.00150188302445251;
     6.766518121363063e-06, 4.758206104804563e-08,  0.05516598334827842;
     -0.001627120880791009, -0.05934224611334332,   1];
x1 = [133; 75; 1];            % homogeneous pixel coordinates
x2 = [124.661; 67.6607; 1];
transpose(x2)*F*x1            % = -0.0020
From F I can get the essential matrix E as E = K2'*F*K1. I decompose E using MATLAB's SVD function to get the four candidate rotations and translations of camera 2 with respect to camera 1:
E = transpose(K2)*F*K1;
[U,S,V] = svd(E);
diag_110 = [1 0 0; 0 1 0; 0 0 0];
newE = U*diag_110*transpose(V);   % enforce the rank-2 constraint on E
[U,S,V] = svd(newE);              % second decomposition gives S = diag(1,1,0)
W = [0 -1 0; 1 0 0; 0 0 1];
R1 = U*W*transpose(V);
R2 = U*transpose(W)*transpose(V);
t1 = U(:,3);                      % norm = 1
t2 = -U(:,3);                     % norm = 1
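For reference, the standard way to choose among these four candidates is a cheirality check: triangulate a correspondence with each (R, t) pair and keep the one that puts the point in front of both cameras. Below is a minimal sketch of that test (my own addition, assuming the usual convention that the decomposed pair maps camera-1 coordinates into camera-2 coordinates, i.e. x_cam2 = R*x_cam1 + t):

% Cheirality check (sketch): keep the candidate that yields positive depth
% in both cameras, under the convention x_cam2 = Rc*x_cam1 + tc.
p1 = K1 \ [133; 75; 1];            % normalized ray through x1 in camera 1
p2 = K2 \ [124.661; 67.6607; 1];   % normalized ray through x2 in camera 2
Rs = {R1, R1, R2, R2};
ts = {t1, t2, t1, t2};
for i = 1:4
    Rc = Rs{i};  tc = ts{i};
    if det(Rc) < 0                 % E and -E are equivalent: force a proper rotation
        Rc = -Rc;
    end
    % a*p1 is the point in camera-1 coords, b*p2 the same point in camera-2
    % coords: b*p2 = Rc*(a*p1) + tc  =>  [Rc*p1, -p2]*[a; b] = -tc
    ab = [Rc*p1, -p2] \ (-tc);
    if ab(1) > 0 && ab(2) > 0
        fprintf('Candidate %d places the point in front of both cameras.\n', i);
    end
end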
Let's say camera 1's frame is used as the coordinate frame in which all measurements are taken. Then the center of camera 1 is located at the point C1 = (0,0,0). In this case, it should be possible to apply the correct rotation R and translation t so that C2 = R*(0,0,0) + t (i.e., the center of camera 2 measured relative to the center of camera 1).
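One caveat worth making explicit here (my addition, not part of the original derivation): whether C2 = t holds depends on the direction of the transform. If (R, t) maps camera-1 coordinates into camera-2 coordinates (x_cam2 = R*x_cam1 + t, the usual convention for a pair obtained by decomposing E), then the camera-2 center expressed in the camera-1 frame is C2 = -R'*t; C2 = R*(0,0,0) + t = t only holds under the inverse convention x_cam1 = R*x_cam2 + t.

% Camera-2 center in the camera-1 frame under the two possible conventions
% (using candidate R1, t1 purely for illustration):
C2_fwd = -transpose(R1)*t1;   % if x_cam2 = R*x_cam1 + t
C2_inv = R1*[0;0;0] + t1;     % = t1, if x_cam1 = R*x_cam2 + t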
Now, take my corresponding pair x1 and x2. Since I know the pixel size of both cameras, and I know the focal length from the intrinsic matrix, I should be able to define two vectors v1 and v2 for the two cameras that intersect at the same 3D point, as below:
pixel_length = 7.4e-6;   % in meters
focal_length = 17e-3;    % in meters

dx1 = (133 - 319.5)*pixel_length;      % x-distance from the principal point of the 640x480 image
dy1 = (75 - 239.5)*pixel_length;       % y-distance from the principal point of the 640x480 image
v1 = [dx1; dy1; focal_length] - [0; 0; 0];   % vector from the camera center to the image point on the image plane

dx2 = (124.661 - 319.5)*pixel_length;  % same idea
dy2 = (67.6607 - 239.5)*pixel_length;  % same idea
v2 = R*([dx2; dy2; focal_length] - [0; 0; 0]) + t;   % apply R and t to measure v2 with respect to the K1 frame
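As a sanity check on this metric construction (my addition): dividing through by the focal length gives the same direction as the normalized coordinates inv(K1)*[u; v; 1], since focal_length/pixel_length = 17e-3/7.4e-6 ≈ 2297, which matches K1(1,1):

% Sanity check (sketch): the metric ray and the normalized-coordinate ray
% agree up to scale, because focal_length/pixel_length ~ 2297 ~ K1(1,1).
d_metric = [dx1; dy1; focal_length];
d_pixels = K1 \ [133; 75; 1];
err = d_metric/norm(d_metric) - d_pixels/norm(d_pixels);   % ~ [0; 0; 0]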
Using these vectors and the parametric form of a line, we can then equate the two lines for triangulation and solve for the two scalars s and t via MATLAB's left division, i.e. by solving the system of equations:
% C1 + s*v1 = C2 + t*v2  =>  [v1, -v2]*[s; t] = C2 - C1
st = [v1, -v2] \ (C2 - C1);   % solve the Ax = B system in a least-squares sense
s = st(1);
t = st(2);                    % note: this t shadows the translation vector t above
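Since two skew lines in 3D generally do not intersect exactly, the left division above gives a least-squares solution; a common follow-up (a sketch of the standard midpoint method, not from the original derivation) is to take the midpoint of closest approach as the triangulated point:

% Midpoint triangulation (sketch): evaluate both lines at the least-squares
% parameters and average them, since the rays rarely meet exactly.
X1 = C1 + s*v1;
X2 = C2 + t*v2;
X  = (X1 + X2)/2;   % triangulated 3D point in the camera-1 frame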
Having determined s and t, we can find the triangulated point by substituting them back into the line equation (e.g., via the midpoint sketch above). However, my process has not been successful: I cannot find a single solution (R, t) for which the point ends up in front of both cameras with both cameras facing forward.
Is there something wrong with my pipeline or my thought process? And is it possible to obtain such a ray for every single pixel?