How to check the intersection of a ray with an object in ARCore

Is there any way to check whether I touched an object on the screen? As far as I understand, the HitResult class lets me check whether I touched a recognized and rendered surface. But I want to check whether I touched the object that is placed on that surface.

2 answers

ARCore really has no concept of an object, so we can't provide this directly. I suggest taking a look at ray-sphere intersection tests as a starting point.
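
As a rough illustration, here is a minimal ray-sphere test (my own sketch, not any ARCore API; rayHitsSphere and its parameters are made up here). It consumes the 6-element ray produced by the helper further down and a bounding sphere you choose for your object:

/**
 * Minimal ray-sphere intersection sketch (hypothetical helper, not ARCore).
 * ray is the 6-element {origin, normalized direction} array produced by
 * screenPointToWorldRay() below; center and radius describe the object's
 * bounding sphere. Returns the distance along the ray to the nearest hit,
 * or -1 if the ray misses.
 */
static float rayHitsSphere(float[] ray, float[] center, float radius) {
    // Vector from the ray origin to the sphere center.
    float ox = center[0] - ray[0];
    float oy = center[1] - ray[1];
    float oz = center[2] - ray[2];
    // Project it onto the (normalized) ray direction.
    float t = ox * ray[3] + oy * ray[4] + oz * ray[5];
    // Squared distance from the sphere center to the ray.
    float d2 = ox * ox + oy * oy + oz * oz - t * t;
    float r2 = radius * radius;
    if (d2 > r2) return -1.0f; // ray passes outside the sphere
    float dt = (float) Math.sqrt(r2 - d2);
    float t0 = t - dt; // nearest of the two intersection points
    return t0 >= 0.0f ? t0 : (t + dt >= 0.0f ? t + dt : -1.0f);
}

Testing the ray against each placed object's bounding sphere and keeping the smallest non-negative distance picks the nearest object under the tap.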

However, I can help with getting the ray itself (added to HelloArActivity):

/**
 * Returns a world coordinate frame ray for a screen point. The ray is
 * defined using a 6-element float array containing the head location
 * followed by a normalized direction vector.
 */
float[] screenPointToWorldRay(float xPx, float yPx, Frame frame) {
    float[] points = new float[12]; // {clip query, camera query, camera origin}
    // Set up the clip-space coordinates of our query point
    // +x is right:
    points[0] = 2.0f * xPx / mSurfaceView.getMeasuredWidth() - 1.0f;
    // +y is up (android UI Y is down):
    points[1] = 1.0f - 2.0f * yPx / mSurfaceView.getMeasuredHeight();
    points[2] = 1.0f; // +z is forwards (remember clip, not camera)
    points[3] = 1.0f; // w (homogenous coordinates)

    float[] matrices = new float[32]; // {proj, inverse proj}
    // If you'll be calling this several times per frame factor out
    // the next two lines to run when Frame.isDisplayRotationChanged().
    mSession.getProjectionMatrix(matrices, 0, 1.0f, 100.0f);
    Matrix.invertM(matrices, 16, matrices, 0);
    // Transform clip-space point to camera-space.
    Matrix.multiplyMV(points, 4, matrices, 16, points, 0);
    // points[4,5,6] is now a camera-space vector. Transform to world space
    // to get a point along the ray.
    float[] out = new float[6];
    frame.getPose().transformPoint(points, 4, out, 3);
    // Use points[8,9,10] as a zero vector to get the ray head position
    // in world space.
    frame.getPose().transformPoint(points, 8, out, 0);

    // Normalize the direction vector:
    float dx = out[3] - out[0];
    float dy = out[4] - out[1];
    float dz = out[5] - out[2];
    float scale = 1.0f / (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    out[3] = dx * scale;
    out[4] = dy * scale;
    out[5] = dz * scale;
    return out;
}

If you call this several times per frame, see the comment on the calls to getProjectionMatrix and invertM.
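
For example, a tap handler could tie this together with the ray-sphere sketch above (onSingleTap, anchorPose, and OBJECT_RADIUS are placeholders for whatever gesture plumbing and object bookkeeping your app already has, not ARCore API):

// Sketch of a tap handler combining the two helpers. rayHitsSphere is
// the hypothetical test sketched earlier in this answer; anchorPose and
// OBJECT_RADIUS stand in for the pose and approximate bounding-sphere
// radius of your placed object.
void onSingleTap(MotionEvent e, Frame frame) {
    float[] ray = screenPointToWorldRay(e.getX(), e.getY(), frame);
    float[] center = {
            anchorPose.tx(), anchorPose.ty(), anchorPose.tz() };
    if (rayHitsSphere(ray, center, OBJECT_RADIUS) >= 0.0f) {
        // The tap ray passes through the object's bounding sphere:
        // treat this as a touch on the object.
    }
}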


In addition to Mouse Picking using Ray Casting (cf. Ian's answer above), another widely used technique is a picking buffer, explained in detail (with C++ code) here.

The trick behind 3D picking is very simple. We attach a running index to each triangle and have the fragment shader output the index of the triangle that the pixel belongs to. The end result is that we get a "color" buffer that does not actually contain colors. Instead, for each pixel covered by some primitive, we get the index of that primitive. When the window is clicked, we read back that index (according to the mouse location) and render the selected triangle red. By combining a depth buffer in the process, we guarantee that when several primitives overlap the same pixel, we get the index of the top-most primitive (the one closest to the camera).

So, in a nutshell:

  • Each object's draw method gets a constant, unique index and a boolean flag for whether this draw renders into the picking buffer or not.
  • The picking render pass converts the index to a grayscale color and renders the scene with it (a sketch follows this list).
  • After all the rendering is done, read back the pixel color at the touch position with GL11.glReadPixels(x, y, ...) (x and y being the pixel you want the color of). Then translate the color back to the index and the index back to the object. Voilà, you have your object under the click.
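
A minimal sketch of the index/color round trip (indexToColor and colorToIndex are hypothetical helper names, not from the linked tutorial):

// Encode an object index as a flat grayscale color for the picking pass,
// and decode it again after glReadPixels. A single 8-bit channel caps
// you at 255 objects; spread the index across RGB if you need more.
static float[] indexToColor(int index) {
    float v = index / 255.0f;             // index 0 is reserved for background
    return new float[] { v, v, v, 1.0f }; // RGBA fed to the picking shader
}

static int colorToIndex(byte redChannel) {
    return redChannel & 0xFF;             // red channel read back from the buffer
}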

To be fair, for a mobile use case you should probably read back a 10x10 rectangle, iterate through it, and pick the first non-background pixel found, because touches are never that accurate.
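
That read-back could look like this sketch using the GLES20 bindings (pickIndexAt is a made-up helper; it assumes the touch coordinates have already been flipped into GL's bottom-up y axis, i.e. glY = surfaceHeight - touchY):

// Read a 10x10 block around the touch point and take the first
// non-background index, since touches are never pixel-accurate.
static int pickIndexAt(int glX, int glY) {
    java.nio.ByteBuffer block = java.nio.ByteBuffer
            .allocateDirect(10 * 10 * 4)
            .order(java.nio.ByteOrder.nativeOrder());
    android.opengl.GLES20.glReadPixels(glX - 5, glY - 5, 10, 10,
            android.opengl.GLES20.GL_RGBA,
            android.opengl.GLES20.GL_UNSIGNED_BYTE, block);
    int picked = 0; // 0 = background
    for (int i = 0; i < 10 * 10 && picked == 0; i++) {
        picked = block.get(i * 4) & 0xFF; // red channel carries the index
    }
    return picked;
}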

This approach works regardless of the complexity of your objects.



Source: https://habr.com/ru/post/1272362/

