Check out CameraIntrinsics.
typedef struct _CameraIntrinsics {
    float FocalLengthX;
    float FocalLengthY;
    float PrincipalPointX;
    float PrincipalPointY;
    float RadialDistortionSecondOrder;
    float RadialDistortionFourthOrder;
    float RadialDistortionSixthOrder;
} CameraIntrinsics;
You can get it from ICoordinateMapper::GetDepthCameraIntrinsics.
Then, for each depth pixel (u, v) with depth value d (in meters), you can get the corresponding 3D point in camera space as follows (ignoring radial distortion):

x = (u - principalPointX) / focalLengthX * d
y = (v - principalPointY) / focalLengthY * d
z = d
For a color-space pixel, you first need to find the corresponding depth-space pixel, which you can do with ICoordinateMapper::MapColorFrameToDepthSpace. Since not every color pixel has an associated depth pixel (the color frame is 1920x1080 but the depth frame is only 512x424), you cannot get a point cloud that covers the full-HD color image.