Fixed-precision floating-point types, the ones supported by your floating-point hardware (float, double, real), are a poor fit for any calculation that needs many digits of precision, such as the example you gave.
The problem is that these floating-point types carry a finite number of significant digits (binary digits, actually), which limits how long a number they can represent. The float type holds roughly 7 decimal digits (for example, 3.141593); double holds roughly 15 (for example, 3.14159265358979); and real holds slightly more than double (about 18 digits for the 80-bit x87 format D uses on x86).
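You can make those digit limits visible by round-tripping a value through single precision. This is a small sketch in Python rather than D (Python's native float is a 64-bit double, and struct lets us squeeze it through a 32-bit float, so no extra tooling is needed):

```python
import struct

def to_f32(x):
    """Round a 64-bit float to the nearest 32-bit float and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

pi = 3.141592653589793

print(pi)          # as a double: ~15-16 significant digits survive
print(to_f32(pi))  # squeezed through float: only ~7 digits remain exact
```

After the round trip the value agrees with pi only in the first 7 or so digits; everything past that is rounding noise.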
Adding an extremely small number to a floating-point value causes the small number's digits to be lost. Watch what happens when we add the following two float values:
```d
float a = 1.234567f, b = 0.0000000001234567f;
float c = a + b;
writefln("a = %f\nb = %f\nc = %f", a, b, c);
```
Both a and b are valid float values, and each stores about 7 digits of precision on its own. But when they are added, only the first 7 digits of the result survive, because the result is rounded back to float:
```
1.2345670001234567  =>  1.234567|0001234567  =>  1.234567
                                 ^^^^^^^^^^
                                 sent to the bit bucket
```
So c ends up equal to a, because the low-order digits contributed by b do not fit in a float and are discarded.
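The same absorption is easy to reproduce outside of D. A sketch in Python, again using struct to emulate float arithmetic (Python's own float is a double):

```python
import struct

def to_f32(x):
    """Round a 64-bit float to the nearest 32-bit float and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

a = to_f32(1.234567)
b = to_f32(1.234567e-10)
c = to_f32(a + b)   # the sum, rounded back to float precision

print(c == a)       # True: b falls entirely below float's ~7-digit limit
```

b is about 10 orders of magnitude smaller than a, far below float's last representable digit, so adding it changes nothing.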
Here is another explanation of the concept, probably much better than mine.
The answer to this problem is arbitrary-precision arithmetic. Unfortunately, arbitrary-precision arithmetic is not supported by CPU hardware, and therefore is not (usually) built into your programming language. However, there are many libraries that provide arbitrary-precision floating-point types and the math you want to do on them; see this question for some suggestions. You probably won't find D-specific libraries for this today, but there are many C libraries (GMP, MPFR, etc.) that should be easy enough to call from D directly, and even easier if you can find D bindings for one of them.
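As a quick illustration of what arbitrary precision buys you (shown with Python's standard decimal module rather than a C or D library, purely because it needs nothing installed), the tiny addend from the example above is no longer lost:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # ask for 50 significant digits

a = Decimal('1.234567')
b = Decimal('0.0000000001234567')

print(a + b)   # 1.2345670001234567 -- every digit of b is preserved
```

The same pattern applies with GMP's mpf or MPFR types in C: you choose the precision when you initialize a number, instead of being stuck with the hardware's 24- or 53-bit mantissa.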