I have four PS3 Eye cameras. I calibrated camera1 and camera2 with the OpenCV cvStereoCalibrate() function using a checkerboard pattern: I detected the corner points in the images and passed their corresponding 3D coordinates to the function.
I also calibrated camera2 and camera3 using a different set of chessboard images viewed by camera2 and camera3.
Using the same method, I calibrated camera3 and camera4.
So I now have the external (extrinsic) and internal (intrinsic) parameters for each pair: camera1 and camera2, camera2 and camera3, and camera3 and camera4. The extrinsic parameters are the rotation and translation matrices between the cameras of each pair; the intrinsic parameters are each camera's focal lengths and principal point.
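To make the intrinsic parameters concrete, here is the standard form of the camera matrix that stereo calibration returns for each camera (the numeric values below are hypothetical placeholders, not actual calibration results):

```python
import numpy as np

# Hypothetical intrinsics for one camera; real values come from calibration.
fx, fy = 540.0, 540.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point in pixels
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])  # 3x3 camera (intrinsic) matrix
```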
Now suppose there is a three-dimensional point (in world coordinates) that is viewed by camera3 and camera4 but is not visible to camera1 and camera2. (I already know how to find the 3D coordinates of a point from a stereo pair.)
My question is: how do I take this 3D point, obtained from camera3 and camera4, and transform it into the world coordinate system of camera1 and camera2, using the rotation, translation, focal length, and principal point parameters?
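One way to sketch the answer: since the pairwise extrinsics chain the camera frames together, the point can be walked back from camera3's frame to camera1's frame by inverting each rigid transform in turn (the intrinsics are not needed for this step, only the rotations and translations). This assumes the stereoCalibrate convention X_right = R @ X_left + T; the extrinsic values below are hypothetical stand-ins for your calibration results:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis, used here only to fake extrinsics."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def to_left(R, T, X_right):
    """Invert X_right = R @ X_left + T, giving X_left = R^T @ (X_right - T)."""
    return R.T @ (X_right - T)

# Hypothetical pairwise extrinsics (replace with your calibration output):
R12, T12 = rot_z(0.10), np.array([0.20, 0.0, 0.0])   # camera1 -> camera2
R23, T23 = rot_z(-0.05), np.array([0.25, 0.0, 0.0])  # camera2 -> camera3

X3 = np.array([0.5, 0.3, 2.0])  # 3D point expressed in camera3's frame
X2 = to_left(R23, T23, X3)      # camera3 frame -> camera2 frame
X1 = to_left(R12, T12, X2)      # camera2 frame -> camera1 frame
print(X1)                        # the point in camera1's coordinate system
```

A point triangulated by the camera3/camera4 pair is typically expressed in camera3's frame, so only the camera2-camera3 and camera1-camera2 extrinsics are needed; the camera3-camera4 extrinsics would enter only if the point were expressed in camera4's frame.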