Get quaternion with Android gyro?

The official developer documentation offers the following method for obtaining a quaternion from a 3D angular velocity vector (wx, wy, wz).

    // Create a constant to convert nanoseconds to seconds.
    private static final float NS2S = 1.0f / 1000000000.0f;
    // EPSILON should represent your maximum allowable margin of error;
    // the value below is only a placeholder.
    private static final float EPSILON = 1.0e-9f;
    private final float[] deltaRotationVector = new float[4];
    private float timestamp;

    public void onSensorChanged(SensorEvent event) {
        // This timestep's delta rotation is to be multiplied by the current
        // rotation after computing it from the gyro sample data.
        if (timestamp != 0) {
            final float dT = (event.timestamp - timestamp) * NS2S;
            // Axis of the rotation sample, not normalized yet.
            float axisX = event.values[0];
            float axisY = event.values[1];
            float axisZ = event.values[2];

            // Calculate the angular speed of the sample.
            float omegaMagnitude = (float) Math.sqrt(axisX * axisX + axisY * axisY + axisZ * axisZ);

            // Normalize the rotation vector if it is big enough to get the axis.
            if (omegaMagnitude > EPSILON) {
                axisX /= omegaMagnitude;
                axisY /= omegaMagnitude;
                axisZ /= omegaMagnitude;
            }

            // Integrate around this axis with the angular speed by the timestep
            // in order to get a delta rotation from this sample over the timestep.
            // We will convert this axis-angle representation of the delta rotation
            // into a quaternion before turning it into the rotation matrix.
            float thetaOverTwo = omegaMagnitude * dT / 2.0f;
            float sinThetaOverTwo = (float) Math.sin(thetaOverTwo);
            float cosThetaOverTwo = (float) Math.cos(thetaOverTwo);
            deltaRotationVector[0] = sinThetaOverTwo * axisX;
            deltaRotationVector[1] = sinThetaOverTwo * axisY;
            deltaRotationVector[2] = sinThetaOverTwo * axisZ;
            deltaRotationVector[3] = cosThetaOverTwo;
        }
        timestamp = event.timestamp;
        float[] deltaRotationMatrix = new float[9];
        SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
        // User code should concatenate the delta rotation we computed with the
        // current rotation in order to get the updated rotation:
        // rotationCurrent = rotationCurrent * deltaRotationMatrix;
    }
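Note that the snippet above only produces the per-sample delta rotation, stored in deltaRotationVector in [x, y, z, w] order. If what you actually want is a quaternion, a minimal sketch using Android's real SensorManager.getQuaternionFromVector helper could look like this (the variable name deltaQuaternion is an assumption for illustration):

    // Sketch: convert the [x, y, z, w] delta rotation vector into a
    // [w, x, y, z] quaternion using the standard Android helper.
    float[] deltaQuaternion = new float[4];
    SensorManager.getQuaternionFromVector(deltaQuaternion, deltaRotationVector);
    // deltaQuaternion[0] is w; deltaQuaternion[1..3] are x, y, z.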

My question is:

This is different from the case of acceleration, where it makes sense to compute the resulting acceleration from the accelerations ALONG the three axes.

I am really confused about why the resulting rotation speed can likewise be computed from the component rotation speeds AROUND the three axes. It makes no sense to me.

Why does this method of taking the magnitude of the composite rotation speed even work?

2 answers

Since your title does not quite match your questions, I will try to answer as many of them as possible.

Gyroscopes do not give you absolute orientation (as the ROTATION_VECTOR sensor does), but only the rotational speeds around the axes they are constructed to "rotate" around. This is due to the design and construction of a gyroscope. Imagine the construction below: the golden thing is spinning and, thanks to the laws of physics, it does not want to change its rotation. Now you can rotate the frame and measure these rotations.

Illustration of a gyroscope

Now, if you want to get something like the "current rotational state" from the gyroscope, you have to start from an initial rotation, call it q0, and continually add the tiny rotational differences that the gyroscope measures around the axes to it: q1 = q0 + gyro0, q2 = q1 + gyro1, ...

In other words: the gyroscope gives you the difference by which it was rotated around its three constructed axes, so you do not get absolute values, only small deltas.

Now this is very general and leaves a couple of questions unanswered:

  • Where do I get the initial position? Answer: take a look at the Rotation Vector sensor; you can use the quaternion obtained from there as your initialization (see the sketch after this list).
  • How do I "sum up" the current q and the gyroscope deltas?
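
For the first question, a minimal sketch of that initialization, assuming a listener registered for the rotation vector sensor (the field name initialQuaternion is an assumption for illustration):

    // Sketch: obtain the initial orientation q0 from the ROTATION_VECTOR sensor.
    // initialQuaternion is a hypothetical [w, x, y, z] array.
    private final float[] initialQuaternion = new float[4];

    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            SensorManager.getQuaternionFromVector(initialQuaternion, event.values);
            // Use initialQuaternion as q0, then integrate the gyro deltas onto it.
        }
    }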

As for the second question, it depends on your current rotation representation: if you use a rotation matrix, a simple matrix multiplication will do the job, as suggested in the comments (note that this matrix multiplication implementation is not efficient!):

    /**
     * Performs naive n^3 matrix multiplication and returns C = A * B
     *
     * @param A matrix in array form (e.g. 3x3 => 9 values)
     * @param B matrix in array form (e.g. 3x3 => 9 values)
     * @return A * B
     */
    public float[] naivMatrixMultiply(float[] B, float[] A) {
        int mA, nA, mB, nB;
        mA = nA = (int) Math.sqrt(A.length);
        mB = nB = (int) Math.sqrt(B.length);
        if (nA != mB)
            throw new RuntimeException("Illegal matrix dimensions.");
        float[] C = new float[mA * nB];
        for (int i = 0; i < mA; i++)
            for (int j = 0; j < nB; j++)
                for (int k = 0; k < nA; k++)
                    C[i + nA * j] += A[i + nA * k] * B[k + nB * j];
        return C;
    }

To use this method, imagine that mRotationMatrix contains the current state; these two lines do the job:

    SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
    mRotationMatrix = naivMatrixMultiply(mRotationMatrix, deltaRotationMatrix);
    // Apply the rotation matrix in OpenGL
    gl.glMultMatrixf(mRotationMatrix, 0);

If you decide to use quaternions instead, imagine again that mQuaternion contains the current state:

    // Perform quaternion multiplication
    mQuaternion.multiplyByQuat(deltaRotationVector);
    // Apply the quaternion in OpenGL
    gl.glRotatef((float) (2.0f * Math.acos(mQuaternion.getW()) * 180.0f / Math.PI),
            mQuaternion.getX(), mQuaternion.getY(), mQuaternion.getZ());

Quaternion multiplication is described here, see equation (23). Make sure you apply the multiplication in the correct order, as it is not commutative!
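
For reference, a minimal sketch of that multiplication for quaternions stored in the same [x, y, z, w] order as deltaRotationVector (the helper name quatMultiply is hypothetical, not a method of the linked Quaternion class):

    /**
     * Sketch of the Hamilton product r = a * b for quaternions stored as
     * [x, y, z, w]. Note the ordering: in general a * b != b * a.
     */
    public static float[] quatMultiply(float[] a, float[] b) {
        float[] r = new float[4];
        r[0] = a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1]; // x
        r[1] = a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0]; // y
        r[2] = a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3]; // z
        r[3] = a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]; // w
        return r;
    }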

If you just want to know the rotation of your device (and I assume that is what you ultimately want), I highly recommend the ROTATION_VECTOR sensor. Gyroscopes, on the other hand, are quite accurate at measuring rotational speed and have a very good dynamic response, but they suffer from drift and do not give you absolute orientation (relative to magnetic north or gravity).

UPDATE: If you want to see a complete example, you can download the source code of a simple demo app from https://bitbucket.org/apacha/sensor-fusion-demo.


It makes sense to me. Acceleration sensors usually work by changing a measured value when force is applied along the measured axis. For instance, if gravity pulls on the sensor measuring that axis, it conducts electricity better. So now you can tell how strongly gravity, or acceleration, pulls in some direction. Easy.

Gyroscopes, meanwhile, are things that spin (OK, or bounce back and forth in a straight line, like a diving board). The gyroscope is spinning; now you spin, and the gyroscope will appear to spin faster or slower depending on which direction you were spinning. Or, if you try to move it, it will resist and try to keep going on its path. So by measuring it you only get a change in rotation; you then have to work out the total change by combining all the small changes over time.
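
In code, "combining all the small changes over time" is just numeric integration. A minimal single-axis sketch, with all names assumed for illustration:

    // Sketch: integrate a single-axis gyro rate over time to get an angle.
    // Drift accumulates in angleRad because every sample's error is summed.
    float angleRad = 0f;        // accumulated rotation around one axis
    long lastTimestampNs = 0L;  // timestamp of the previous sample

    void onGyroSample(float omegaRadPerSec, long timestampNs) {
        if (lastTimestampNs != 0L) {
            float dt = (timestampNs - lastTimestampNs) * 1.0e-9f; // ns -> s
            angleRad += omegaRadPerSec * dt;
        }
        lastTimestampNs = timestampNs;
    }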

Usually, neither of these things is a single sensor. They are often three separate sensors mounted perpendicular to each other, each measuring a different axis. Sometimes all the sensors sit on the same chip, but they are still distinct parts of it.

