This question is a follow-up to this link.
In short, I work with depth images from the Kinect, which are 16-bit. C++ AMP has restrictions on which data bit sizes it supports, so I'm trying to use textures to handle this.
As far as I can tell, I'm writing it correctly. However, there seem to be some problems with the data extracted from the original texture.
Here is the code:
typedef concurrency::graphics::texture<unsigned int, 2> TextureData;
typedef concurrency::graphics::texture_view<unsigned int, 2> Texture;

cv::Mat image(480, 640, CV_16UC1);
cv::Mat image2(480, 640, CV_16UC1);

for (int i = 0; i < 480; i++)
{
    for (int j = 0; j < 640; j++)
    {
        int gradientInX = (j / 640.f) * 65535;
        image.at<uint16_t>(i, j) = gradientInX;
        image2.at<uint16_t>(i, j) = gradientInX;
    }
}

cv::imshow("image", image);
cv::waitKey(50);

concurrency::extent<2> imageSize(480, 640);
int bits = 16;
const unsigned int nBytes = imageSize.size() * 2;
{
    uchar* data = image.data;
    TextureData texDataS(imageSize, data, nBytes, bits);
    Texture texS(texDataS);

    TextureData texDataD(imageSize, bits);
    Texture texR(texDataD);

    parallel_for_each(
        imageSize,
        [=, &texDataS](concurrency::index<2> idx) restrict(amp)
    {
        int val = texDataS(idx);
        texR.set(idx, val);
    });

    concurrency::graphics::copy_async(texR, image2.data, imageSize.size() * (bits / 8u));

    cv::imshow("result", image2);
    cv::waitKey(50);
}
And the results:

[image]

And after copying using the GPU:

[image]
I also tried using a Kinect image to see what would happen, and the result surprised me:

Original:

[image]

Result:

[image]
Does anyone know what is going on?
Edit: I also tried loading the image from a 16-bit file (CV_LOAD_IMAGE_ANYDEPTH):
cv::Mat image = cv::imread("Depth247.tiff", CV_LOAD_IMAGE_ANYDEPTH);
cv::Mat image2(480, 640, CV_16UC1);