Kinect RGB-D video compression

I need to send video from a Kinect camera over a network. I capture video from the Kinect's two sources:

  • Color video (RGB): 32 bits per pixel, 640x480 at 30 frames per second.
  • Depth data (D): 16 bits per pixel, representing the distance to the nearest object in millimeters, 640x480 at 30 frames per second.

Together this amounts to a raw bandwidth of roughly 53 MB/s. That is why I need to encode (compress) both streams at the source and decode them at the target, where the RGB-D data will be processed by an object tracking algorithm.
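For what it's worth, that figure follows directly from the stream parameters above; here is a quick sanity check (plain arithmetic, nothing Kinect-specific):

```cpp
#include <cstdio>

int main() {
    // Per-frame sizes for the two Kinect streams (values from the question).
    const double rgb_bytes   = 640.0 * 480 * 32 / 8;  // 32 bpp color frame
    const double depth_bytes = 640.0 * 480 * 16 / 8;  // 16 bpp depth frame
    const double fps = 30.0;

    // Raw bandwidth in MiB/s (1 MiB = 1024 * 1024 bytes).
    const double total = (rgb_bytes + depth_bytes) * fps / (1024.0 * 1024.0);
    std::printf("raw RGB-D bandwidth: %.1f MiB/s\n", total);  // ~52.7 MiB/s
    return 0;
}
```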

So far I have found many articles describing algorithms for this, for example: "RGB and depth intra-frame cross-compression for low bandwidth 3D video".

The problem is that the algorithms described in such papers have no publicly available implementations. I know I could implement them myself, but they rely on many other complex image-processing techniques that I do not know enough about (edge detection, contour characterization, ...).

I also found a C++ library based on a discrete median filter, delta encoding (to avoid sending redundant data) and LZ4 compression: http://thebytekitchen.com/2014/03/24/data-compression-for-the-kinect/
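The delta + LZ4 idea is simple enough to sketch. Below is a minimal illustration of it for the 16-bit depth stream; this is my own sketch, not The Byte Kitchen's code, the median-filter step is omitted, the function name `compressDepthFrame` is made up for the example, and it requires liblz4:

```cpp
#include <lz4.h>       // liblz4: LZ4_compressBound, LZ4_compress_default
#include <cstdint>
#include <vector>

// Depth values change little between consecutive frames, so subtracting the
// previous frame yields a buffer that is mostly zeros, which LZ4 compresses well.
std::vector<char> compressDepthFrame(const std::vector<uint16_t>& current,
                                     const std::vector<uint16_t>& previous)
{
    // 1. Delta step: per-pixel difference against the previous frame.
    std::vector<uint16_t> delta(current.size());
    for (size_t i = 0; i < current.size(); ++i)
        delta[i] = static_cast<uint16_t>(current[i] - previous[i]);

    // 2. LZ4 step: compress the (highly redundant) delta buffer.
    const int srcSize = static_cast<int>(delta.size() * sizeof(uint16_t));
    std::vector<char> out(LZ4_compressBound(srcSize));
    const int written = LZ4_compress_default(
        reinterpret_cast<const char*>(delta.data()), out.data(),
        srcSize, static_cast<int>(out.size()));
    out.resize(written > 0 ? written : 0);   // empty vector on failure
    return out;
}
```

The receiver would reverse the steps: LZ4-decompress the buffer, then add the deltas to its copy of the previous frame to reconstruct the current one.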

My question: is there a free/open-source and efficient way to compress/decompress RGB-D Kinect video?

PS: I am working in C++.


Source: https://habr.com/ru/post/1620094/

