Range of floating point numbers in .NET?

Excerpt from the book:

A float value consists of a 24-bit signed mantissa and an 8-bit exponent. Accuracy is approximately seven decimal digits. Values range from -3.402823 × 10^38 to 3.402823 × 10^38

How is this range calculated? Can someone explain the binary arithmetic behind it?

+3
2 answers

I would definitely read the article that Richard points to. But if you need a simpler explanation, I hope this helps:

Basically, as you said, there is 1 sign bit, 8 bits for the exponent, and 23 for the fraction. Then, using this equation (from Wikipedia):

N = (1 - 2s) * 2^(x-127) * (1 + m*2^-23)

where s is the sign bit, x is the exponent (biased by 127), and m is the mantissa (the 23 fraction bits, taken as an integer).
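
To make that concrete, here is a minimal Python sketch (the bit layout is the same IEEE 754 single-precision format that .NET's float uses) that pulls the three fields out of a 32-bit pattern and evaluates the formula. The function name and sample bit pattern are just illustrative:

import struct

def decode_single(bits):
    # Evaluate N = (1 - 2s) * 2^(x-127) * (1 + m*2^-23) for a normal number
    s = (bits >> 31) & 0x1      # 1 sign bit
    x = (bits >> 23) & 0xFF     # 8-bit biased exponent
    m = bits & 0x7FFFFF         # 23-bit fraction
    return (1 - 2 * s) * 2.0 ** (x - 127) * (1 + m * 2.0 ** -23)

bits = 0x40490FDB  # bit pattern of the single-precision value closest to pi
native = struct.unpack('>f', bits.to_bytes(4, 'big'))[0]
print(decode_single(bits), native)  # both print 3.1415927410125732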

Note that the exponent value 0xFF is reserved for special values (infinity and NaN), so the largest exponent available for a finite number is 0xFE.
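
You can see that reservation directly: with the exponent field at 0xFF, a zero fraction decodes to infinity and a nonzero fraction to NaN. A quick Python check (struct just reinterprets the raw 32 bits):

import struct

def from_bits(bits):
    return struct.unpack('>f', bits.to_bytes(4, 'big'))[0]

print(from_bits(0x7F800000))  # exponent 0xFF, fraction 0    -> inf
print(from_bits(0xFF800000))  # same, with the sign bit set  -> -inf
print(from_bits(0x7FC00000))  # exponent 0xFF, fraction != 0 -> nan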

Plugging in the maximum values:

N = (1 - 2*0) * 2^(254-127) * (1 + (2^23 - 1) * 2^-23)

N = 1 * 2^127 * 1.999999

N = 3.4 x 10^38

And if you set the sign bit to 1, you get the corresponding minimum, -3.4 x 10^38.

Q.E.D.
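
If you want to verify the arithmetic, this small Python sketch plugs the maximal fields into the formula above and checks the result against the bit pattern 0x7F7FFFFF (sign 0, exponent 0xFE, all 23 fraction bits set):

import struct

# Largest finite single-precision value, computed two ways
via_formula = (1 - 2 * 0) * 2.0 ** (254 - 127) * (1 + (2 ** 23 - 1) * 2.0 ** -23)
via_bits = struct.unpack('>f', (0x7F7FFFFF).to_bytes(4, 'big'))[0]

assert via_formula == via_bits
print(via_formula)  # 3.4028234663852886e+38, i.e. roughly 3.402823 x 10^38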

+1

" " , , , .

+7

Source: https://habr.com/ru/post/1716100/

