For my application, I analyzed the spatial resolution of Kinect v2.
To analyze the spatial resolution, I recorded a flat plane perpendicular to the camera at a given distance and converted the depth map of the plane into a point cloud. Then I compared each point with its neighbors by calculating the Euclidean distance.
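Roughly, the neighbor comparison looks like this (a minimal sketch with assumed names: P is the N-by-3 point cloud in meters, and knnsearch needs the Statistics and Machine Learning Toolbox):

    % distance from every point to its nearest other point in the cloud
    [~, nnDist] = knnsearch(P, P, 'K', 2);   % column 1 is the point itself (distance 0)
    spacing = nnDist(:, 2);                  % per-point distance to the closest neighbor
    fprintf('mean point spacing: %.2f mm\n', mean(spacing) * 1000);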
Calculating the Euclidean distance for this case (1 meter between the plane and the Kinect), the resolution is close to 3 mm between the points. For a plane at a distance of 2 meters, I got a resolution of up to 3 mm.
Comparing this with the literature, I think my results are pretty bad.
For example, Yang et al. report an average resolution of 4 mm for a plane at a distance of 4 meters to the Kinect (Evaluating and Improving the Depth Accuracy of Kinect for Windows v2).
Here is an example of my flat point cloud (2 meters away from my Kinect):

Has anyone made any observations regarding the spatial resolution of Kinect v2, or any idea why my resolution is this bad?
In my opinion, something went wrong when converting my depth image to world coordinates. Therefore, here is the code:
    % normalize image points by multiplying with the inverse of K
    % u, v are the uv-coordinates of my depth image
    u_n = (u(:) - c_x) / f_x;
    v_n = (v(:) - c_y) / f_y;

    % calc radial distortion
    r = sqrt(power(u_n, 2) + power(v_n, 2));
    radial_distortion = 1.0 + radial2nd * power(r, 2) + radial4nd * power(r, 4) + radial6nd * power(r, 6);

    % apply radial distortion to uv-coordinates
    u_dis = u_n(:) .* radial_distortion;
    v_dis = v_n(:) .* radial_distortion;

    % apply camera matrix to get undistorted depth point
    x_depth = u_dis * f_x + c_x;
    y_depth = v_dis * f_y + c_y;

    % convert 2D to 3D
    % d is the given depth value at (u, v)
    X = ((x_depth(:) - c_x) .* d(:)) ./ f_x;
    Y = ((y_depth(:) - c_y) .* d(:)) ./ f_y;
    Z = d;
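For comparison, a plain pinhole back-projection without the distortion round-trip would look like this (just a sketch using the same variables as above; X_p, Y_p, Z_p and P are placeholder names):

    % back-project each pixel (u, v) with depth d through the pinhole model only
    X_p = (u(:) - c_x) .* d(:) ./ f_x;
    Y_p = (v(:) - c_y) .* d(:) ./ f_y;
    Z_p = d(:);
    P   = [X_p, Y_p, Z_p];   % N-by-3 point cloud

If radial_distortion were exactly 1, both versions would give identical points.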
EDIT: So far, I have also tried to get the points directly from the coordinate mapper, without any further calibration steps.
The results regarding the resolution are still the same. Does anyone have results to compare?