Why is arbitrary precision in double literals allowed in Java?

I just found out from Peter Lawrey's post that this is a valid expression and evaluates to `true`:

 333333333333333.33d == 333333333333333.3d 

My question is: why is he allowed to write double literals that cannot be represented exactly as a double, while integer literals that cannot be represented are a compile-time error? What is the rationale for this decision?
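The contrast can be seen directly in a small program (a minimal sketch; the commented-out lines show what the compiler rejects):

```java
public class LiteralRounding {
    public static void main(String[] args) {
        // Both double literals are silently rounded to the same
        // nearest representable double, so the comparison is true.
        System.out.println(333333333333333.33d == 333333333333333.3d);

        // Integer literals, by contrast, must fit their type exactly:
        // int i = 2147483648;              // compile error: integer number too large
        // long l = 99999999999999999999L;  // compile error: integer number too large
    }
}
```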


On a side note, I can actually cause an out-of-range compile error for double literals :-)

 99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999d 

So, as long as we are within the representable range (min, max), the literal is rounded to the nearest double, but beyond that range the compiler refuses to round it.
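To see what value the literal actually becomes, you can pass the double through `BigDecimal`, which prints its exact stored value. Doubles in this magnitude range (between 2^48 and 2^49) are spaced 2^-4 = 0.0625 apart, so both literals from the question land on the same neighbor:

```java
import java.math.BigDecimal;

public class NearestDouble {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact value the literal
        // was rounded to; both .3 and .33 round to ...333.3125.
        System.out.println(new BigDecimal(333333333333333.3d));

        // A literal that would round to infinity is rejected instead:
        // double d = 1e309;  // compile error: floating-point number too large
    }
}
```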

2 answers

The problem is that very few of the decimal fractions you can type can be represented exactly as an IEEE double. So if you rejected all inexact constants, you would make double literals very cumbersome to use. Most of the time, the "pretend we can represent it" behavior is much more useful.
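A quick way to check which decimal strings survive the round trip exactly (a sketch; `isExact` is a hypothetical helper name): only fractions whose denominator is a power of two are exact, which illustrates how restrictive an "exact literals only" rule would be.

```java
import java.math.BigDecimal;

public class ExactCheck {
    // True iff the decimal string is exactly representable as a double:
    // compare the exact value of the parsed double against the string.
    static boolean isExact(String decimal) {
        double d = Double.parseDouble(decimal);
        return new BigDecimal(d).compareTo(new BigDecimal(decimal)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isExact("0.5"));   // denominator 2: exact
        System.out.println(isExact("0.25"));  // denominator 4: exact
        System.out.println(isExact("0.1"));   // not a power-of-two fraction
        System.out.println(isExact("3.14")); // likewise inexact
    }
}
```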


The main reason, apparently, is that Java simply cannot tell when your precision runs out, because there is no CPU support for detecting this.

Why is there no CPU flag or the like? Because the representation of the number simply does not allow it. Even simple numbers like "0.1" have no exact binary representation. 0.1 is stored as "00111111 10111001 10011001 10011001 10011001 10011001 10011001 10011010" (see http://www.binaryconvert.com/result_double.html?decimal=048046049 ).

This value is not exactly 0.1, but 1.00000000000000005551115123126E-1 .

Therefore, even for these “simple” cases, the code would have to throw an exception.
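You can reproduce both claims from Java itself: dump the 64-bit pattern stored for `0.1` and print its exact decimal value (a sketch; `0x3fb999999999999a` is the hex form of the binary string quoted above):

```java
import java.math.BigDecimal;

public class PointOne {
    public static void main(String[] args) {
        // The IEEE 754 bit pattern actually stored for the literal 0.1:
        System.out.println(Long.toHexString(Double.doubleToLongBits(0.1)));

        // The exact decimal value of that bit pattern, which is
        // slightly above 0.1:
        System.out.println(new BigDecimal(0.1));
    }
}
```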


Source: https://habr.com/ru/post/895131/
