I have read many times, in articles and on MSDN, that a float (or double) does not have an exact representation of many real-world decimal values. Right! This is evident when equality tests go wrong, and when simple addition or subtraction checks don't come out as expected.
It is also said that a float cannot exactly represent decimal values like 0.1. But if we declare a float in Visual Studio, for example `float a = 0.1f;`, why does the debugger display exactly 0.1 when debugging? Shouldn't it show something like 0.09999999...? (I'll skip the link that explains the representation itself.)
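To illustrate the point, here is a minimal C# sketch (class and method names are just placeholders) assuming a standard IEEE 754 single-precision float: the default formatting rounds to a short string that looks like 0.1, but asking for more significant digits exposes the value that is actually stored.

```csharp
using System;

class Program
{
    static void Main()
    {
        float a = 0.1f;

        // Default formatting (and the debugger) round to a short, friendly string.
        Console.WriteLine(a);                           // prints 0.1

        // Forcing more significant digits reveals the nearest representable float.
        Console.WriteLine(a.ToString("G9"));            // prints 0.100000001

        // Widening to double shows even more of the stored value's error.
        Console.WriteLine(((double)a).ToString("G17")); // roughly 0.10000000149011612
    }
}
```

In other words, the bits of `a` encode a value close to but not equal to 0.1; the "0.1" you see is a rounded display, not the exact stored number.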

This is a layman's question, or maybe I'm still missing some concepts!