Entering a double precision number

I am writing some astronomical programs, and I have the source code for Jeffrey Sax's C implementation of the algorithms in Meeus's book Astronomical Algorithms.

One of the functions he wrote is ReadReal(), which reads a real number from the user (via the keyboard or a terminal). The relevant extract from this function is:

    scanf("%lf", &r);
    return r * 1.000000000000001;

The multiplication by a constant in the second line obviously has something to do with rounding, but I don't see exactly what. I searched for answers, and the constant appears in many places on different sites, but never in this context. Does anyone have experience with this, or know what is going on here? Is it important?

Thanks for any help.



Floating-point numbers on most architectures (which use the IEEE 754 representation) can only represent numbers that have a finite binary expansion, i.e. numbers that can be written exactly in a form like 11.00100100001 in binary, where the length of that digit string is limited by the size of the floating-point type (53 significand bits for double).

Any number not of this form, i.e. not a finite sum of powers of two — for example 1/3, 1/5, or 1/10 — can never be represented exactly by such a floating-point variable.

Since users typically enter values such as 0.1, rather than the exactly representable 0.125, this loss of accuracy shows up quite early in a setting like yours. Multiplying by this constant is one way the author, on his platform, found to nudge the stored value closer to what he believed the user intended. It is all rather subjective. If you only print with short precision, e.g. printf("%0.5f", x), you should not notice any lack of precision.


Source: https://habr.com/ru/post/891189/

