Kinect 1.8: color frame and depth frame are not aligned

My program suffers from poor alignment between the depth and color images.

The player’s mask is not in the same place as the person (see picture below).

    void _AllFreamReady(object sender, AllFramesReadyEventArgs e)
    {
        using (ColorImageFrame _colorFrame = e.OpenColorImageFrame())
        {
            if (_colorFrame == null) // if the frame is empty, do nothing
            {
                return;
            }

            // create a pixel array for one image frame, sized to the frame captured from the stream
            byte[] _pixels = new byte[_colorFrame.PixelDataLength];
            _colorFrame.CopyPixelDataTo(_pixels); // copy the pixels into the array
            int _stride = _colorFrame.Width * 4;  // each pixel takes 4 bytes: blue, green, red, and one unused

            image1.Source = BitmapSource.Create(_colorFrame.Width, _colorFrame.Height,
                96, 96, PixelFormats.Bgr32, null, _pixels, _stride);

            if (_closing)
            {
                return;
            }

            using (DepthImageFrame _depthFrame = e.OpenDepthImageFrame())
            {
                if (_depthFrame == null)
                {
                    return;
                }

                byte[] _pixelsdepth = _GenerateColoredBytes(_depthFrame, _pixels);
                int _dstride = _depthFrame.Width * 4;

                image3.Source = BitmapSource.Create(_depthFrame.Width, _depthFrame.Height,
                    96, 96, PixelFormats.Bgr32, null, _pixelsdepth, _dstride);
            }
        }
    }

    private byte[] _GenerateColoredBytes(DepthImageFrame _depthFrame, byte[] _pixels)
    {
        short[] _rawDepthData = new short[_depthFrame.PixelDataLength];
        _depthFrame.CopyPixelDataTo(_rawDepthData);

        byte[] _dpixels = new byte[_depthFrame.Height * _depthFrame.Width * 4];

        const int _blueindex = 0;
        const int _greenindex = 1;
        const int _redindex = 2;

        for (int _depthindex = 0, _colorindex = 0;
             _depthindex < _rawDepthData.Length && _colorindex < _dpixels.Length;
             _depthindex++, _colorindex += 4)
        {
            // PlayerIndexBitmask (not PlayerIndexBitmaskWidth) extracts the player index bits
            int _player = _rawDepthData[_depthindex] & DepthImageFrame.PlayerIndexBitmask;

            if (_player > 0)
            {
                _dpixels[_colorindex + _redindex] = _pixels[_colorindex + _redindex];
                _dpixels[_colorindex + _greenindex] = _pixels[_colorindex + _greenindex];
                _dpixels[_colorindex + _blueindex] = _pixels[_colorindex + _blueindex];
            }
        }

        return _dpixels;
    }

Program output

2 answers

The RGB and depth data are not aligned. This is due to the physical placement of the depth sensor and the RGB camera in the Kinect housing: they sit at different positions, so you cannot expect the images from two different viewpoints to line up on their own.

However, the problem is quite common and used to be solved with KinectSensor.MapDepthFrameToColorFrame, which was deprecated after SDK 1.6. Now you need the CoordinateMapper.MapDepthFrameToColorFrame method instead.
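In its simplest form, the call looks roughly like this (a minimal sketch, assuming sensor is your started KinectSensor and depthPixels is a DepthImagePixel[] already filled from a depth frame):

    // Minimal mapping sketch: one ColorImagePoint per depth pixel
    ColorImagePoint[] colorPoints = new ColorImagePoint[depthPixels.Length];
    sensor.CoordinateMapper.MapDepthFrameToColorFrame(
        DepthImageFormat.Resolution640x480Fps30, depthPixels,
        ColorImageFormat.RgbResolution640x480Fps30, colorPoints);
    // colorPoints[i] is the color-frame (X, Y) that corresponds to depth pixel i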

The Coordinate Mapping Basics WPF C# sample shows how to use this method. The most important pieces of code are the following:

    // Intermediate storage for the depth data received from the sensor
    private DepthImagePixel[] depthPixels;

    // Intermediate storage for the color data received from the camera
    private byte[] colorPixels;

    // Intermediate storage for the depth to color mapping
    private ColorImagePoint[] colorCoordinates;

    // Inverse scaling factor between color and depth
    private int colorToDepthDivisor;

    // Format we will use for the depth stream
    private const DepthImageFormat DepthFormat = DepthImageFormat.Resolution320x240Fps30;

    // Format we will use for the color stream
    private const ColorImageFormat ColorFormat = ColorImageFormat.RgbResolution640x480Fps30;

    // ...

    // Initialization
    this.colorCoordinates = new ColorImagePoint[this.sensor.DepthStream.FramePixelDataLength];
    this.depthWidth = this.sensor.DepthStream.FrameWidth;
    this.depthHeight = this.sensor.DepthStream.FrameHeight;

    int colorWidth = this.sensor.ColorStream.FrameWidth;
    int colorHeight = this.sensor.ColorStream.FrameHeight;

    this.colorToDepthDivisor = colorWidth / this.depthWidth;

    this.sensor.AllFramesReady += this.SensorAllFramesReady;

    // ...

    private void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        // in the middle of shutting down, so nothing to do
        if (null == this.sensor)
        {
            return;
        }

        bool depthReceived = false;
        bool colorReceived = false;

        using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
        {
            if (null != depthFrame)
            {
                // Copy the pixel data from the image to a temporary array
                depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);
                depthReceived = true;
            }
        }

        using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
        {
            if (null != colorFrame)
            {
                // Copy the pixel data from the image to a temporary array
                colorFrame.CopyPixelDataTo(this.colorPixels);
                colorReceived = true;
            }
        }

        if (true == depthReceived)
        {
            this.sensor.CoordinateMapper.MapDepthFrameToColorFrame(
                DepthFormat,
                this.depthPixels,
                ColorFormat,
                this.colorCoordinates);

            // ...

            int depthIndex = x + (y * this.depthWidth);
            DepthImagePixel depthPixel = this.depthPixels[depthIndex];

            // scale color coordinates to depth resolution
            int X = colorImagePoint.X / this.colorToDepthDivisor;
            int Y = colorImagePoint.Y / this.colorToDepthDivisor;

            // depthPixel is the depth for the (X,Y) pixel in the color frame
        }
    }
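Applied to the question's player mask, the idea is to stop copying color bytes at the same array index and instead look them up through the mapping. A sketch using the sample's variables above (maskPixels is a hypothetical output buffer, new byte[this.depthWidth * this.depthHeight * 4], in Bgr32 layout):

    // Sketch: build the player mask by sampling the color pixel that the
    // mapper says corresponds to each depth pixel (maskPixels is hypothetical)
    for (int depthIndex = 0; depthIndex < this.depthPixels.Length; depthIndex++)
    {
        if (this.depthPixels[depthIndex].PlayerIndex > 0)
        {
            ColorImagePoint p = this.colorCoordinates[depthIndex];

            // Skip points that map outside the color frame
            if (p.X >= 0 && p.X < colorWidth && p.Y >= 0 && p.Y < colorHeight)
            {
                int colorIndex = (p.X + (p.Y * colorWidth)) * 4; // Bgr32: 4 bytes per pixel
                int maskIndex = depthIndex * 4;

                maskPixels[maskIndex + 0] = this.colorPixels[colorIndex + 0]; // blue
                maskPixels[maskIndex + 1] = this.colorPixels[colorIndex + 1]; // green
                maskPixels[maskIndex + 2] = this.colorPixels[colorIndex + 2]; // red
            }
        }
    }

The resulting maskPixels image is at depth resolution, so it can be displayed exactly like image3 in the question.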

I am working on this issue myself. I agree with VitoShadow that one solution is coordinate mapping, but what his answer does not point out is the part that handles the ratio between the mismatched depth and color resolutions ( this.colorToDepthDivisor = colorWidth / this.depthWidth; ). This is used together with a one-pixel data shift ( this.playerPixelData[playerPixelIndex - 1] = opaquePixelValue; ) to compensate for pixels the mapping skips.
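For context, here is roughly how those two lines could fit together, patterned on the SDK's Green Screen sample (a sketch: playerPixelData is assumed to be an int[] with one entry per depth pixel, and opaquePixelValue a constant marking a pixel as visible):

    // Sketch patterned on the Green Screen sample; playerPixelData and
    // opaquePixelValue are assumed to be declared as described above
    for (int depthIndex = 0; depthIndex < this.depthPixels.Length; depthIndex++)
    {
        if (this.depthPixels[depthIndex].PlayerIndex > 0)
        {
            ColorImagePoint colorImagePoint = this.colorCoordinates[depthIndex];

            // Scale the mapped color coordinates back down to depth resolution
            int colorInDepthX = colorImagePoint.X / this.colorToDepthDivisor;
            int colorInDepthY = colorImagePoint.Y / this.colorToDepthDivisor;

            // X must stay > 0 so that the -1 shift below stays inside the row
            if (colorInDepthX > 0 && colorInDepthX < this.depthWidth
                && colorInDepthY >= 0 && colorInDepthY < this.depthHeight)
            {
                int playerPixelIndex = colorInDepthX + (colorInDepthY * this.depthWidth);

                // Mark the pixel and its left neighbour opaque; the extra pixel
                // papers over gaps left by rounding in the mapping
                this.playerPixelData[playerPixelIndex] = opaquePixelValue;
                this.playerPixelData[playerPixelIndex - 1] = opaquePixelValue;
            }
        }
    }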

Unfortunately, this can create a border around the masked image wherever the depth frame does not reach the edge of the color frame. As an alternative, I am trying to avoid skeleton mapping and optimize my code by tracking the depth data with Emgu CV, passing the tracked point as the center of an ROI in the color frame. I am still working on it.
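A hypothetical sketch of that ROI idea (the Emgu CV tracking itself is omitted; sensor, depthPixels, and depthWidth are assumed to exist as in the other answer): compute the centroid of the player pixels in the depth frame, then map that single point to color space with CoordinateMapper.MapDepthPointToColorPoint:

    // Average the player pixels to get a centroid in depth coordinates
    int sumX = 0, sumY = 0, count = 0;
    for (int i = 0; i < depthPixels.Length; i++)
    {
        if (depthPixels[i].PlayerIndex > 0)
        {
            sumX += i % depthWidth;
            sumY += i / depthWidth;
            count++;
        }
    }

    if (count > 0)
    {
        int cx = sumX / count;
        int cy = sumY / count;

        DepthImagePoint center = new DepthImagePoint
        {
            X = cx,
            Y = cy,
            Depth = depthPixels[(cy * depthWidth) + cx].Depth
        };

        // Map the single centroid point into color space
        ColorImagePoint roiCenter = sensor.CoordinateMapper.MapDepthPointToColorPoint(
            DepthImageFormat.Resolution640x480Fps30, center,
            ColorImageFormat.RgbResolution640x480Fps30);

        // roiCenter.X / roiCenter.Y would seed the ROI in the color frame
    }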


Source: https://habr.com/ru/post/1440887/

