So, I was looking through the SensorEvent documentation, trying to figure out how to determine the direction of north relative to the phone's axes. I drew a small image that illustrates how I picture the two coordinate systems:

So, if the world coordinates are x, y and z, with magnetic north along z and y pointing to the sky, and the phone coordinates are Px, Py and Pz, then I'd like to be able to calculate the projection of each axis of one system onto the axes of the other.
It seems that SENSOR_TYPE_ROTATION_VECTOR might be the right sensor to listen to, but it doesn't seem to give me enough information to get all of these projections directly. Should I normalize the ROTATION_VECTOR, apply it to the axis I care about, and then pull out the components?
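Here's a sketch of what I mean (getQuaternionFromVector() is a real SensorManager helper as far as I can tell; the hand-rolled rotation of [0, 1, 0] below is my own guess at what "apply it to the axis" should look like):

```java
import android.hardware.SensorEvent;
import android.hardware.SensorManager;

// Called from onSensorChanged() for a Sensor.TYPE_ROTATION_VECTOR event.
static float[] deviceYInWorld(SensorEvent event) {
    // Expand the rotation vector into a unit quaternion: q[0] = w, then x, y, z.
    float[] q = new float[4];
    SensorManager.getQuaternionFromVector(q, event.values);
    float w = q[0], x = q[1], y = q[2], z = q[3];

    // Rotating the unit vector [0, 1, 0] by q picks out the middle column
    // of the equivalent rotation matrix.
    return new float[] {
        2 * (x * y - w * z),
        1 - 2 * (x * x + z * z),
        2 * (y * z + w * x)
    };
}
```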
The other obvious single sensor seems to be SENSOR_TYPE_ORIENTATION, but again I don't understand what to do with its values. If I want to know the three projections of the real-world coordinate axes onto Py, I figure I would just rotate [0, 1, 0] by the reported angles.
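For example, something like this sketch (I'm assuming the values arrive as azimuth, pitch and roll in degrees, rotating about z, x and y respectively, and the Rz·Ry·Rx ordering from the general rotation matrix on Wikipedia; either assumption may be wrong, which is part of my question):

```java
// Rotate the unit vector [0, 1, 0] by the angles reported by
// Sensor.TYPE_ORIENTATION; this is just the middle column of
// R = Rz(azimuth) * Ry(roll) * Rx(pitch).
static float[] rotateUnitY(float[] values) {
    double azimuth = Math.toRadians(values[0]); // rotation about z
    double pitch   = Math.toRadians(values[1]); // rotation about x
    double roll    = Math.toRadians(values[2]); // rotation about y

    float[] Py = new float[3];
    Py[0] = (float) (Math.cos(azimuth) * Math.sin(roll) * Math.sin(pitch)
                   - Math.sin(azimuth) * Math.cos(pitch));
    Py[1] = (float) (Math.sin(azimuth) * Math.sin(roll) * Math.sin(pitch)
                   + Math.cos(azimuth) * Math.cos(pitch));
    Py[2] = (float) (Math.cos(roll) * Math.sin(pitch));
    return Py;
}
```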
I got these formulas from the formula for general rotations (since it rotates a unit vector, you can just select the middle column). I would think that the components of Py will then be the projections of x, y and z onto Py, but do I have this backwards? Is it instead the projection of Py onto each of the three real-world axes? (Or, since these are all dot products of unit vectors, are the two readings numerically the same thing?)
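For reference, this is the general rotation matrix I mean, with my assumed angle mapping of α = azimuth, β = roll, γ = pitch:

$$
R_z(\alpha)\,R_y(\beta)\,R_x(\gamma) =
\begin{pmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\
\sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma
\end{pmatrix}
$$

Applying it to [0, 1, 0] picks out the middle column, which is what the code above computes.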
Finally, I noticed that there is a getRotationMatrixFromVector() helper that seems to calculate these projections for you, but again I'm not sure whether I have things backwards. If I want to know the three projections of x, y and z onto Py, should I take the second column of the rotation matrix or the second row?
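In other words, something like this (a sketch that just extracts both candidates, since I don't know which one I actually want):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class NorthFinder implements SensorEventListener {
    private final float[] rotationMatrix = new float[9]; // row-major 3x3

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;

        // Expand the rotation vector into a full rotation matrix.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);

        // Candidate 1: the second COLUMN (indices 1, 4, 7).
        float[] col = { rotationMatrix[1], rotationMatrix[4], rotationMatrix[7] };

        // Candidate 2: the second ROW (indices 3, 4, 5).
        float[] row = { rotationMatrix[3], rotationMatrix[4], rotationMatrix[5] };

        // One of these should be the projections of x, y and z onto Py,
        // depending on which way the matrix maps (device -> world or world -> device).
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* unused */ }
}
```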
(Sorry for the very verbose version of what is probably a fairly simple question; for the sake of future confused readers I wanted to be very explicit about the coordinate systems, since they are my main source of confusion.)