How can I use the built-in Kinect driver for Linux?

The latest Linux kernel supports the Kinect through a built-in driver. I want to access the RGB and D (depth) streams and put them into a 2D array, either of 64-bit ints or as two separate arrays. C# is preferred, C++ is acceptable.

So my question is: where can I find more information about this, such as articles and documentation? And what would a simple example program look like, for example one that prints the color and depth values at pixel (100, 100)?

I will upvote any good links and accept the first working code example.

Thanks, Frankie

P.S. I know about the OpenKinect, NITE, and Microsoft SDK projects, etc. I want this to be easy to install on other Linux machines and distributions, so a generic kernel driver is preferred. My main use will be a webcam that replaces pixels farther away than some depth X and saves the result to disk.

Update

Since I asked, I have not learned much more. I found this article. I checked the Git repository, which does not seem to have been updated since April, and I see no sign of it being merged into the Linux kernel. There is no mention of Kinect in any later blog posts other than this unrelated one.

Update 2

I cannot find who submitted the Kinect driver to the kernel. GitHub has a kernel mirror. I tried searching for it with Google, but varying the query and options did not turn up anything. Then I tried searching GitHub, with no positive hits. Does anyone have any info?

2 answers

Unfortunately, the kernel driver does not support the depth stream, only the raw image from the monochrome sensor, so it is not possible to do this with the kernel driver alone. See also the blog post I wrote on this topic. If you unload the built-in kernel modules, you can do it with libfreenect.

You can find the driver source file on GitHub: kinect.c.
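
Not the kernel-driver route, but to illustrate the libfreenect suggestion above, here is a minimal sketch that prints the raw depth value at pixel (100, 100). It assumes a libfreenect version with the frame-mode API (older releases used freenect_set_depth_format instead), that the header is installed under libfreenect/, and that gspca_kinect has been unloaded so libfreenect can claim the device; link with -lfreenect.

    #include <stdio.h>
    #include <stdint.h>
    #include <libfreenect/libfreenect.h>   /* header path may differ per install */

    /* Called once per depth frame; for FREENECT_RESOLUTION_MEDIUM the buffer
     * is 640x480 of 11-bit values stored in 16-bit words. */
    static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
    {
        uint16_t *d = (uint16_t *)depth;
        printf("raw depth at (100,100): %u\n", d[100 * 640 + 100]);
    }

    int main(void)
    {
        freenect_context *ctx;
        freenect_device *dev;

        if (freenect_init(&ctx, NULL) < 0 || freenect_open_device(ctx, &dev, 0) < 0)
            return 1;

        freenect_set_depth_callback(dev, depth_cb);
        freenect_set_depth_mode(dev,
            freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
        freenect_start_depth(dev);

        /* Pump USB events; depth_cb fires for every completed frame. */
        while (freenect_process_events(ctx) >= 0)
            ;

        freenect_stop_depth(dev);
        freenect_close_device(dev);
        freenect_shutdown(ctx);
        return 0;
    }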


The driver does not support the D (depth) stream, according to the commit message at the linked location:

[media] gspca - kinect: New subdriver for Microsoft Kinect

The Kinect sensor is a device used by Microsoft for the Kinect project, which is a system for controller-free human-computer interaction designed for the Xbox 360.

In the Kinect device, RGB-D data is captured by two distinct sensors: a regular RGB sensor and a monochrome sensor which, using IR structured light, captures what is finally presented as a depth map; so we basically have a structured-light 3D scanner.

The Kinect gspca subdriver just supports the video stream for now, exposing the output from the RGB sensor or the unprocessed output from the monochrome sensor; it does not deal with the processed depth stream, but it allows using the sensor as a webcam or as an IR camera (an external source of IR light might be needed for the latter).

The low-level implementation is based on the OpenKinect project code (http://openkinect.org).

From the driver source, the author appears to be Antonio Ospite, reachable at ospite@studenti.unina.it.

As already stated in the comments, the author should be able to answer all your questions, since what you want really depends on what exactly is exposed by the driver (which may even depend on the version).
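
For the webcam use described in that commit message, the subdriver should appear as an ordinary V4L2 device node, so a standard capture program applies. Below is a minimal sketch of grabbing one frame and printing the pixel at (100, 100); it assumes the node is /dev/video0, that the driver accepts the read() I/O method, and that the negotiated format is roughly 3 bytes per pixel (in practice, check fmt.fmt.pix.pixelformat after VIDIOC_S_FMT).

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);   /* assumed device node */
        if (fd < 0) { perror("open"); return 1; }

        /* Request 640x480 RGB; the driver will adjust to what it supports. */
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 640;
        fmt.fmt.pix.height = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

        unsigned char *frame = malloc(fmt.fmt.pix.sizeimage);
        if (read(fd, frame, fmt.fmt.pix.sizeimage) < 0) { perror("read"); return 1; }

        /* Assumes an RGB24-like layout; raw Bayer output would need demosaicing. */
        size_t off = 100 * fmt.fmt.pix.bytesperline + 100 * 3;
        printf("pixel at (100,100): %u %u %u\n", frame[off], frame[off + 1], frame[off + 2]);

        free(frame);
        close(fd);
        return 0;
    }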


Source: https://habr.com/ru/post/1385592/

