I am using OpenCV 2.4.6 and the Kinect SDK to calibrate several Kinects. After I get the image data from the Kinects, I convert it to OpenCV images and follow the usual procedure (for example, RGBDemo), using the following pipeline:
    // find the corners
    cv::findChessboardCorners(*image, patternSize, corners,
                              CV_CALIB_CB_NORMALIZE_IMAGE | CV_CALIB_CB_ADAPTIVE_THRESH);
    cvtColor(*image, gray_image, CV_BGR2GRAY);
    cornerSubPix(gray_image, corners, cv::Size(5, 5), cv::Size(-1, -1),
                 cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));

    // after I collect 20 sets of corners, do the calibration
    CalcCameraIntrinsic(corner_src, rgb_intr_src, coeff_src);
    CalcCameraIntrinsic(corner_dist, rgb_intr_dist, coeff_dist);
    cv::stereoCalibrate(patternPoints, corner_src, corner_dist,
                        rgb_intr_src, coeff_src, rgb_intr_dist, coeff_dist,
                        cv::Size(width, height), R, T, E, F,
                        cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 50, 1e-6),
                        cv::CALIB_FIX_INTRINSIC);
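CalcCameraIntrinsic is my own helper; roughly, it just wraps cv::calibrateCamera to get the per-camera intrinsics and distortion coefficients. A minimal sketch of such a helper, assuming the 3D pattern corners and image size are passed in (the function name and parameters here are illustrative, not my exact code):

    #include <iostream>
    #include <vector>
    #include <opencv2/opencv.hpp>

    // Sketch of a per-camera intrinsic calibration helper.
    // objectPoints: 3D chessboard corners, one set per view, in board units
    // imageCorners: the 2D corners collected with cornerSubPix above
    static void CalcCameraIntrinsicSketch(
        const std::vector<std::vector<cv::Point3f> >& objectPoints,
        const std::vector<std::vector<cv::Point2f> >& imageCorners,
        cv::Size imageSize, cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
    {
        std::vector<cv::Mat> rvecs, tvecs; // per-view extrinsics, not needed afterwards
        double rms = cv::calibrateCamera(objectPoints, imageCorners, imageSize,
                                         cameraMatrix, distCoeffs, rvecs, tvecs);
        std::cout << "RMS reprojection error: " << rms << std::endl;
    }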
I believe the detected corner positions are correct, because I check them with drawChessboardCorners and see no errors. After all these steps, I get a rotation matrix and a translation vector. But when I apply this transformation to the point clouds I get from the Kinects, I find that they are not aligned.
I do not know why. I do not think it is the order of the images: no matter which of the two point clouds I apply the transformation to, I cannot get a correct alignment. The only remaining cause I can think of is how I am passing the parameters to the OpenCV functions. The way I apply the transform is sketched below.
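For reference, cv::stereoCalibrate returns R and T such that a point expressed in the first camera's coordinate frame maps into the second camera's frame as p2 = R * p1 + T. A minimal sketch of applying that to a cloud, assuming the cloud is stored as cv::Point3f in the first (src) camera's coordinates and in the same units as T:

    #include <vector>
    #include <opencv2/opencv.hpp>

    // Map each point from the src camera frame into the dst camera frame:
    // p_dst = R * p_src + T (R: 3x3, T: 3x1, as returned by stereoCalibrate).
    void transformCloud(std::vector<cv::Point3f>& cloud,
                        const cv::Mat& R, const cv::Mat& T)
    {
        cv::Mat R64, T64;
        R.convertTo(R64, CV_64F);
        T.convertTo(T64, CV_64F);
        for (size_t i = 0; i < cloud.size(); ++i)
        {
            cv::Mat p = (cv::Mat_<double>(3, 1) << cloud[i].x, cloud[i].y, cloud[i].z);
            cv::Mat q = R64 * p + T64; // the point in the dst camera's frame
            cloud[i] = cv::Point3f((float)q.at<double>(0),
                                   (float)q.at<double>(1),
                                   (float)q.at<double>(2));
        }
    }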
Thank you for your attention!
8-20 Edit: Although no one has answered yet, I found one possible reason: my point cloud is in pixel-based units, while the matrices obtained from OpenCV are in meters. I converted the point cloud to meters, but that did not fix it either. The matrix I get is probably right after all, so I now suspect that something is wrong with my display function. I will post a conclusion if I find the answer.
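To make the units concrete: the translation T comes out in whatever units patternPoints uses (meters in my case), while the Kinect SDK reports depth in millimeters, so the cloud has to be rescaled before applying R and T. A sketch, assuming a cv::Point3f cloud in millimeters:

    // Bring the cloud into the same units as T (millimeters -> meters)
    // before calling transformCloud.
    for (size_t i = 0; i < cloud.size(); ++i)
        cloud[i] *= 0.001f;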
8-21 Edit: I found the reason. I had overlooked the difference between the OpenCV and OpenGL coordinate systems. With that fixed, the matrix aligns the two point clouds, though not yet perfectly.
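In case it helps others: OpenCV's camera frame is x-right / y-down / z-forward, while OpenGL's is x-right / y-up / z-toward-viewer, so the extrinsics have to be conjugated by an axis flip before being used in an OpenGL viewer. A minimal sketch, assuming R and T are CV_64F as returned by stereoCalibrate:

    #include <opencv2/opencv.hpp>

    // Flip the y and z axes with S = diag(1,-1,-1) to convert a rigid
    // transform from OpenCV's convention to OpenGL's:
    // R_gl = S * R_cv * S,  T_gl = S * T_cv  (S is its own inverse).
    void cvToGl(const cv::Mat& Rcv, const cv::Mat& Tcv, cv::Mat& Rgl, cv::Mat& Tgl)
    {
        cv::Mat S = (cv::Mat_<double>(3, 3) << 1,  0,  0,
                                               0, -1,  0,
                                               0,  0, -1);
        Rgl = S * Rcv * S;
        Tgl = S * Tcv;
    }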