Defining binary floating-point numbers in decimal format has subtle issues.
Why are values defined as long double with the suffix L, and then converted back to double?

With typical binary64, the maximum finite value, (2 - 2^-52) * 2^1023, is about 1.798e+308, or exactly:
179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368
The number of significant decimal digits needed to convert text to a unique double is DBL_DECIMAL_DIG (typically 17, and at least 10). In any case, exponential notation without excess digits is certainly clearer.
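As a quick illustration of that digit count, here is a minimal sketch (DBL_DECIMAL_DIG is C11) that prints DBL_MAX with just enough significant digits to round-trip, then reads it back with strtod:

    #include <float.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        char buf[64];
        /* Precision DBL_DECIMAL_DIG - 1 after the point gives
           DBL_DECIMAL_DIG significant digits: enough for a unique round trip. */
        snprintf(buf, sizeof buf, "%.*e", DBL_DECIMAL_DIG - 1, DBL_MAX);
        double back = strtod(buf, NULL);
        printf("%s round-trips: %d\n", buf, back == DBL_MAX);
        return 0;
    }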
    /* 1 2345678901234567 */       // Sorted
    1.79769313486231550856124...   // DBL_MAX's next smaller value, for reference
    1.79769313486231570814527...   // the exact value
    1.79769313486231570815e+308L   // gcc
    1.7976931348623158e+308        // VS (just a hair closer to exact than "next largest")
    1.7976931348623159077293...    // what DBL_MAX's next larger value would be, if not limited by range
Different compilers may not convert this string exactly as hoped, sometimes ignoring some of the least significant digits, though this is compiler dependent.
Another source of subtle conversion differences, and I suspect this is why the L is appended: the floating-point processor, which may not adhere exactly to the IEEE standard, affects the computation of a double. Worse, the constant 1.797...e+308 might convert to infinity because of minute errors in the text-to-double conversion when performed with double math. When the text is converted as a long double, those conversion errors are very small, and converting the long double result back to double then rounds to the expected value.

In short, forcing long double math ensures the constant will not inadvertently become infinity.
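A minimal sketch of the idea, using gcc's literal from the sorted list above:

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Parse as long double first, then round to double: the extra
           precision keeps text-to-binary conversion error from pushing
           the value past DBL_MAX into infinity. */
        double d = (double) 1.79769313486231570815e+308L;
        printf("finite: %d, equals DBL_MAX: %d\n", isfinite(d) != 0, d == DBL_MAX);
        return 0;
    }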
I would expect the following, which matches neither gcc's nor VS's definition, to suffice on an IEEE 754 compliant FPU.
    #define __DBL_MAX__ 1.7976931348623157e+308
Casting back to double does make DBL_MAX a double. That satisfies the expectation in much code that DBL_MAX is a double, not a long double. I do not see a specification that requires this, however.
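One way to check that expectation is a sketch using C11 _Generic:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        puts(_Generic(DBL_MAX,
                      double:      "DBL_MAX is a double",
                      long double: "DBL_MAX is a long double",
                      default:     "DBL_MAX is something else"));
        return 0;
    }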
Why is DBL_MIN_10_EXP defined as -307 when the minimum value's exponent is -308?
This follows from the definition of DBL_MIN_10_EXP: "... the minimum negative integer such that 10 raised to that power is in the range of normalized floating-point numbers." DBL_MIN is about 2.225e-308, so 10^-307 is in range but 10^-308 is not; the mathematical boundary lies between -307 and -308, and the minimum integer meeting the definition is -307.
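To see this numerically, a small sketch comparing powers of ten against DBL_MIN:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        printf("DBL_MIN = %.17g\n", DBL_MIN);  /* about 2.2250738585072014e-308 */
        printf("1e-307 in normal range: %d\n", 1e-307 >= DBL_MIN);  /* 1 */
        printf("1e-308 in normal range: %d\n", 1e-308 >= DBL_MIN);  /* 0: subnormal */
        return 0;
    }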
A side observation

Although VS treats long double as a distinct type, it uses the same encoding as double, so there is no numerical advantage to using the L suffix there.
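A quick way to observe this, assuming x64 MSVC, where the two types share the binary64 encoding:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* On VS these pairs match (53/53 and 15/15); on gcc with an x87
           or binary128 long double they differ. */
        printf("DBL_MANT_DIG = %d, LDBL_MANT_DIG = %d\n", DBL_MANT_DIG, LDBL_MANT_DIG);
        printf("DBL_DIG = %d, LDBL_DIG = %d\n", DBL_DIG, LDBL_DIG);
        return 0;
    }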