The number of decimal digits in C

I'd like to change the number of decimal digits that are shown whenever I print a floating point number in C. Does this have something to do with the value of FLT_DIG defined in float.h? If so, how can I change it from 6 to 10?

I get a number like 0.000000, while the actual value is 0.0000003455.

Thank you very much.

+4
4 answers

There are two separate issues here: the precision of the stored floating point number, which is determined by the choice of float vs double, and the precision used when the number is printed:

    float foo = 0.0123456789;
    printf("%.4f\n", foo);    // This will print 0.0123 (4 digits after the point).

    double bar = 0.012345678912345;
    printf("%.10lf\n", bar);  // This will print 0.0123456789
+10

I experimented with this problem and found that float does not give you much precision. If you use double instead, it will give you the correct answer; just specify %.10lf in the format string for ten digits after the decimal point.

0

You are running out of precision. Floats don't have much precision; if you want more significant digits, use the double data type.

Also, it seems that you are using printf() and friends to display the numbers. Note that if you switch from float to double, printf's %f still works, because float arguments are promoted to double in variadic calls (the l in %lf is simply ignored there). It is in scanf() that the length modifier matters: there you must use %lf to read into a double.

-1

@kosmoplan - thanks for the nice question!

@epsalon - thanks for the good answer. My first thought was also float versus double. I was wrong. You hit the nail on the head by realizing that this is really a printf format problem. Good job!

Finally, to calm some lingering peripheral disputes:

    /* SAMPLE OUTPUT:
       a=0.000000, x=0.012346, y=0.012346
       a=0.0000003455, x=0.0123456791, y=0.0123456789
    */
    #include <stdio.h>

    int main (int argc, char *argv[])
    {
        float  x = 0.0123456789, a = 0.0000003455;
        double y = 0.0123456789;

        printf ("a=%f, x=%f, y=%lf\n", a, x, y);
        printf ("a=%.10f, x=%.10f, y=%.10lf\n", a, x, y);
        return 0;
    }
-1

Source: https://habr.com/ru/post/1438172/
