Storing two float values in one float variable

I would like to store two float values in one 32-bit float variable. Encoding will happen in C#, while decoding should be done in an HLSL shader.

The best solution I have found so far is a fiddly scheme that shifts the decimal point of the encoded values and stores them in the integer and fractional parts of the floating-point "carrier":

123.456 -> 12.3 and 45.6 

It cannot handle negative values, but that's fine.
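To make the scheme concrete, here is a minimal Python sketch of one way such a decimal-offset packing could work. The scale factors (x10 for the integer part, /100 for the fraction) are my assumptions chosen to match the 123.456 example, not a definitive implementation:

```python
import math

def pack_decimal(a, b):
    # Hypothetical sketch: a and b are assumed to lie in [0, 100).
    # a*10 is stored in the integer part, b/100 in the fractional part.
    return math.floor(a * 10) + b / 100.0

def unpack_decimal(c):
    i = math.floor(c)
    return i / 10.0, (c - i) * 100.0

# pack_decimal(12.3, 45.6) yields 123.456;
# unpack_decimal recovers approximately 12.3 and 45.6.
```

Note that in an actual 32-bit float the fractional digits would be further degraded, since only about 7 significant decimal digits survive.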

However, I was wondering if there is a better way to do this.

EDIT: A few additional details about the task:

I work with a fixed vertex data structure in Unity, where vertex attributes are stored as floats (float2 for UVs, float3 for normals, etc.). There seems to be no way to properly attach extra data, so I have to work within these limits; that is why I framed this as the more general data-encoding problem. For example, I can sacrifice the secondary UV channel to carry 2x2 extra data channels.

The target is Shader Model 3.0, but I wouldn't mind if decoding also worked reasonably well on SM2.0.

Data loss is fine as long as it is "reasonable." The expected range of values is 0..64, but I think 0..1 will do as well, since remapping to any range inside the shader is cheap. It is important to keep accuracy as high as possible. Negative values are not important.

1 answer

Following Gnietschow's recommendation, I adapted YellPika's algorithm. (This is C# for Unity3D.)

    float Pack(Vector2 input, int precision)
    {
        Vector2 output = input;
        output.x = Mathf.Floor(output.x * (precision - 1));
        output.y = Mathf.Floor(output.y * (precision - 1));
        return (output.x * precision) + output.y;
    }

    Vector2 Unpack(float input, int precision)
    {
        Vector2 output = Vector2.zero;
        output.y = input % precision;
        output.x = Mathf.Floor(input / precision);
        return output / (precision - 1);
    }

Quick and dirty testing gave the following statistics (1 million pairs of random values in the range 0..1):

    Precision: 2048 | Avg error: 0.00024424 | Max error: 0.00048852
    Precision: 4096 | Avg error: 0.00012208 | Max error: 0.00024417
    Precision: 8192 | Avg error: 0.00011035 | Max error: 0.99999940

Precision 4096 seems to be the sweet spot. Note that both packing and unpacking in these tests ran on the CPU, so the results could be worse on a GPU that cuts corners on floating-point precision.
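The blow-up at precision 8192 has a concrete cause: a 32-bit float has a 24-bit significand, so packed integers are exact only up to 2^24 = 16,777,216, and 8192 * 8192 = 67,108,864 exceeds that. A Python sketch (my illustration, simulating float32 storage with `struct`) reproduces the failure:

```python
import math
import struct

def as_float32(x):
    # Round-trip through IEEE-754 binary32 to mimic a 32-bit shader float.
    return struct.unpack('f', struct.pack('f', x))[0]

def pack(x, y, precision):
    qx = math.floor(x * (precision - 1))
    qy = math.floor(y * (precision - 1))
    return as_float32(qx * precision + qy)

def unpack(packed, precision):
    y = packed % precision
    x = math.floor(packed / precision)
    return x / (precision - 1), y / (precision - 1)

# Precision 4096: the largest packed value is 4095*4096 + 4095 = 16,777,215
# = 2^24 - 1, still exactly representable, so only quantization error remains.
# Precision 8192: pack(1.0, 1.0, 8192) produces 67,108,863, which float32
# rounds to 67,108,864, so the y channel unpacks as 0 instead of 1.
```

This matches the measured max error of ~1.0 at precision 8192 and explains why 4096 is the largest safe choice for this packing in a 32-bit float.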

In any case, I don't know if this is the best algorithm, but it seems good enough for my case.


Source: https://habr.com/ru/post/949402/

