Why does a float appear to show the exact value it was declared with?

I have read many times, in articles and on MSDN, that a float (or double) cannot exactly represent many real-world integer and decimal values. Right! This is evident when equality tests fail, or when simple addition and subtraction tests don't produce the expected results.
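For instance, something like this rough C# console sketch (the exact digits printed can vary with the .NET runtime's default formatting) shows the addition problem:

    using System;

    class FloatEquality
    {
        static void Main()
        {
            float sum = 0f;
            for (int i = 0; i < 10; i++)
                sum += 0.1f;                        // each addition rounds slightly

            Console.WriteLine(sum == 1.0f);         // False
            Console.WriteLine(sum.ToString("G9"));  // 1.00000012 (9 significant digits)
        }
    }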

It is also said that a float cannot exactly represent a decimal value like 0.1. But if we declare a float in Visual Studio, for example float a = 0.1f;, how does the debugger display exactly 0.1? Shouldn't it show something like 0.09999999...? I seem to be missing the link needed to understand this.
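Oddly, if I format the value with more digits myself (a rough C# sketch; the exact digits depend on the runtime's formatting), the discrepancy does show up, which makes the debugger's neat 0.1 even more puzzling:

    using System;

    class FloatDisplay
    {
        static void Main()
        {
            float a = 0.1f;

            Console.WriteLine(a);                 // 0.1 with default formatting
            Console.WriteLine(a.ToString("G9"));  // 0.100000001 (9 significant digits)
            Console.WriteLine((double)a);         // roughly 0.10000000149011612
        }
    }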


This may be a layman's question, or maybe I'm still missing some concepts!

1 answer

"How does the debugger display exactly 0.1?"

0.1 is not an exact float value. It is what you wrote in your source code, but it is not the value the float actually stores. I can see why this is confusing :) I suspect the debugger shows the shortest string representation that uniquely round-trips back to the same float value.

Try using:

 float a = 0.0999999999f; 

... and I suspect the debugger will show it as 0.1 as well.

So it's not that the debugger displays a "more accurate" value; rather, it displays a "more convenient" representation.
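As a quick check (a minimal C# sketch, assuming a plain console app), both literals produce the same float:

    using System;

    class SameFloat
    {
        static void Main()
        {
            float a = 0.1f;
            float b = 0.0999999999f;

            // Both literals round to the same 32-bit pattern, so they compare equal.
            Console.WriteLine(a == b);   // True
            Console.WriteLine(BitConverter.ToInt32(BitConverter.GetBytes(a), 0) ==
                              BitConverter.ToInt32(BitConverter.GetBytes(b), 0));   // True
        }
    }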

If you want to display the exact value stored in a float or double, I have some code you can use for that.
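The link to that code was lost here; purely as an illustration of the idea (not the answerer's actual code), here is a minimal sketch that decodes the IEEE 754 bits of a float and prints the exact decimal value it stores:

    using System;
    using System.Numerics;

    class ExactFloat
    {
        // Prints the exact decimal expansion of the value a float stores,
        // by decoding its IEEE 754 bits. Illustrative sketch only.
        static string ToExactString(float value)
        {
            if (value == 0f) return "0";
            if (float.IsNaN(value) || float.IsInfinity(value))
                return value.ToString();

            int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
            bool negative = bits < 0;
            int exponent = (bits >> 23) & 0xFF;
            int mantissa = bits & 0x7FFFFF;

            if (exponent == 0)
                exponent = 1;             // subnormal: no implicit leading 1
            else
                mantissa |= 0x800000;     // normal: restore implicit leading 1

            // The stored value is exactly mantissa * 2^(exponent - 150).
            int e = exponent - 150;

            BigInteger digits;
            int scale;                    // number of digits after the decimal point
            if (e >= 0)
            {
                digits = (BigInteger)mantissa << e;
                scale = 0;
            }
            else
            {
                // mantissa / 2^k == (mantissa * 5^k) / 10^k
                digits = (BigInteger)mantissa * BigInteger.Pow(5, -e);
                scale = -e;
            }

            string s = digits.ToString().PadLeft(scale + 1, '0');
            if (scale > 0)
                s = s.Insert(s.Length - scale, ".");
            return (negative ? "-" : "") + s;
        }

        static void Main()
        {
            Console.WriteLine(ToExactString(0.1f));  // 0.100000001490116119384765625
            Console.WriteLine(ToExactString(0.5f));  // 0.5 followed by trailing zeros
        }
    }

The reason this always terminates is that every finite float is a mantissa times a power of two, and any (negative) power of two has a finite decimal expansion, so the exact stored value can be written out in full.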


Source: https://habr.com/ru/post/989785/
