In .NET, a float is a single-precision floating-point number stored in 32 bits using the IEEE 754 binary32 format. Apparently, the code builds this number by collecting bits in an int and then converts it into a float using unsafe code. This is what C++ calls a reinterpret_cast: no conversion actually takes place when the cast is performed; the bits are simply reinterpreted as a new type.

The collected number is 4019999A in hexadecimal, or 01000000 00011001 10011001 10011010 in binary:
- The sign bit is 0 (this is a positive number).
- The exponent bits are 10000000 (or 128), which gives an exponent of 128 - 127 = 1 (the fraction is multiplied by 2^1 = 2).
- The fraction bits are 00110011001100110011010, in which the repeating pattern 0011 is easy to spot: it is the binary expansion of 0.2, so the significand is (approximately) 1.2.
The returned float therefore has the same bits as 2.4 converted to floating point, and the whole function could simply be replaced with the literal 2.4f.
The final bits that "break" the repeating pattern of the fraction are the result of rounding: 0.2 cannot be represented exactly in 23 bits, and it is precisely this rounding that makes the bits match what the literal 2.4f produces.
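The decoding above can be cross-checked mechanically. This sketch uses Python rather than C# (the bit layout is the same IEEE 754 binary32 either way), extracting the sign, exponent, and fraction fields from 0x4019999A and reassembling the value:

```python
import struct

bits = 0x4019999A
sign = bits >> 31                    # 0, so the number is positive
exponent = (bits >> 23) & 0xFF       # 128, so the unbiased exponent is 128 - 127 = 1
fraction = bits & 0x7FFFFF           # the 23 fraction bits: 0x19999A

significand = 1 + fraction / 2**23   # implicit leading 1 gives roughly 1.2
value = (-1) ** sign * significand * 2.0 ** (exponent - 127)

# The result equals float 2.4 (the binary32 value nearest to 2.4, widened to a double):
print(value == struct.unpack('<f', struct.pack('<f', 2.4))[0])  # True
```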
So what is the difference between a regular cast and this strange unsafe cast?
Assume the following code:
```csharp
int result = 0x4019999A;               // 1075419546
float normalCast = (float)result;
float unsafeCast = *(float*)&result;   // only possible in an unsafe context
```
The first cast takes the integer 1075419546 and converts it to its floating-point representation, i.e. roughly 1075419546f. This involves computing the sign, exponent, and fraction bits needed to represent the original integer as a floating-point number, which is a nontrivial calculation.
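That first, value-preserving conversion can be illustrated in Python (using the struct module as a stand-in for a binary32 cast). Note that 1075419546 does not fit exactly in a binary32 significand, so the cast rounds to the nearest representable value:

```python
import struct

result = 0x4019999A   # 1075419546 as a plain integer

# A normal numeric cast preserves the *value*, as closely as binary32 allows.
# Packing the integer as a float and unpacking it simulates (float)result:
normal_cast = struct.unpack('<f', struct.pack('<f', result))[0]

print(normal_cast)    # 1075419520.0, the nearest binary32 value to 1075419546
```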
The second cast is more devious (and can only be done in an unsafe context). &result takes the address of result, returning a pointer to the location where the integer 1075419546 is stored. The pointer dereference operator * can then be used to retrieve the value that the pointer points to. Using *&result would retrieve the integer stored at that location, but by first casting the pointer to float* (pointer to float), a float is retrieved from the same memory instead, which assigns the float 2.4f to unsafeCast. So the expression *(float*)&result takes a pointer to result, pretends it is a pointer to a float, and retrieves the value it points to.
Unlike the first cast, the second cast requires no calculation at all. It just copies the 32 bits stored in result into unsafeCast (which, conveniently, is also 32 bits).
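The bit-copying behavior of the unsafe cast can also be sketched in Python: pack the same 32 bits as an int, then unpack them as a float. No arithmetic happens, only relabeling of the bits:

```python
import struct

result = 0x4019999A

# Reinterpret the same 32 bits as a float, mimicking *(float*)&result:
unsafe_cast = struct.unpack('<f', struct.pack('<i', result))[0]

# The bits now read as float 2.4 (stored here as the closest double):
print(abs(unsafe_cast - 2.4) < 1e-6)  # True
```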
In general, this kind of reinterpretation can go wrong in many ways, but by using unsafe you tell the compiler that you know what you are doing.